Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome back to the Deep Dive. If you've ever looked at a big tech project and kind of wondered how they actually go from just a vague idea to working software, well, this deep dive is definitely for you. We're pulling apart the blueprint, systems analysis and design (SAD), and its roadmap, the systems development life cycle, or SDLC. Exactly.
Speaker 2 (00:22):
And when we talk about information technology, it's really more than just software, right? It's the hardware, the software, the services, all working together, managing information, sharing it. Our focus today isn't the nitty-gritty coding details. It's the framework itself, the thing that decides whether a project sinks or swims, often way before anyone writes a single line of code.
Speaker 1 (00:42):
We want to give you that shortcut to understanding the
big strategic choices that really impact the budget and frankly,
whether the system's even useful long term.
Speaker 2 (00:49):
And we're zeroing in on those five main phases of
the SDLC: planning, analysis, design, implementation, and then support and security.
Speaker 1 (00:57):
We'll look at what each phase delivers and, importantly, how
those very early decisions connect to the total cost of
ownership down the line. Okay, so let's lay out the
basic structure. First, we've got these five phases and each
one produces something tangible, right, a deliverable.
Speaker 2 (01:13):
That's right. It's a formal process, a cycle. You start with Phase one, systems planning. The output, that's the preliminary investigation report, kind of a first assessment.
Speaker 1 (01:22):
Got it. An initial look.
Speaker 2 (01:23):
Then you move into Phase two systems analysis. This is
where you figure out what the system needs to do,
all captured in the system requirements document.
Speaker 1 (01:30):
That sounds incredibly important. The requirements doc.
Speaker 2 (01:33):
Oh, it's the absolute foundation. Everything builds on that. It feeds directly into Phase three, systems design. Now we're talking how. How's it built? The architecture, the databases.
Speaker 1 (01:41):
The interfaces. And that's documented too? Yep.
Speaker 2 (01:44):
In the system design specification. Then Phase four is systems implementation: building, testing, and installing it. The deliverable is the actual functioning system. And finally, Phase five, systems support and security. This is about keeping it running, keeping it secure, making sure you have a fully operational system for, well, hopefully years.
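For quick reference, here is a minimal sketch in Python pairing each phase with the deliverable just described; the data structure and the exact wording of the deliverables are illustrative, not from the episode.

```python
# Illustrative only: the five SDLC phases and their deliverables,
# as described in the episode, captured as a simple mapping.
SDLC_PHASES = {
    1: ("Systems Planning", "Preliminary investigation report"),
    2: ("Systems Analysis", "System requirements document"),
    3: ("Systems Design", "System design specification"),
    4: ("Systems Implementation", "Functioning system (built, tested, installed)"),
    5: ("Systems Support and Security", "Fully operational system over its life"),
}

for number, (phase, deliverable) in SDLC_PHASES.items():
    print(f"Phase {number}: {phase} -> {deliverable}")
```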
Speaker 1 (02:03):
Okay, so that's the framework. But within that, teams approach the actual building differently. There are, what, three main methodologies for how they handle data and processes?
Speaker 2 (02:13):
Broadly speaking, yes. Historically, the starting point was structured analysis. The key thing here is that processes and data are treated as totally separate things. You model them independently. Hmm.
Speaker 1 (02:25):
Which feels a little unnatural. I mean, in the real world,
data and what you do with it seem linked.
Speaker 2 (02:31):
Exactly. That realization led to object-oriented analysis, or OO. This approach bundles data and the actions on that data together into these things called objects. Objects, right. And the real power isn't just the fancy terms, like objects having properties or sending messages. It's about reuse. If you model, say, a customer order as an object, you can potentially reuse that
(02:53):
whole chunk of logic elsewhere.
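A minimal Python sketch of that idea, bundling data and the operations on it into one reusable object; the CustomerOrder class and its fields are hypothetical, chosen just to illustrate reuse.

```python
# Hypothetical example: a CustomerOrder object bundles its data
# (customer, line items) with the behavior that acts on that data.
class CustomerOrder:
    def __init__(self, customer_id: str):
        self.customer_id = customer_id
        self.items: list[tuple[str, int, float]] = []  # (sku, quantity, unit_price)

    def add_item(self, sku: str, quantity: int, unit_price: float) -> None:
        self.items.append((sku, quantity, unit_price))

    def total(self) -> float:
        return round(sum(qty * price for _, qty, price in self.items), 2)

# The same object can be reused by billing, reporting, or a web front end
# without re-implementing the order logic in each place.
order = CustomerOrder("C-1001")
order.add_item("WIDGET-7", 3, 9.99)
print(order.total())  # 29.97
```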
Speaker 1 (02:54):
Ah. So it cuts down on future work, reduces that
long term.
Speaker 2 (02:57):
Cost. Precisely. It helps manage complexity and can lower
the TCO from the start by making bits interchangeable. And
then there's the approach everyone's talking about now, agile methods.
Speaker 1 (03:08):
Agile. Yeah, less of a strict waterfall, more iterative. Exactly.
Speaker 2 (03:12):
It uses a rapid, incremental, almost spiral model. You build working pieces quickly, maybe in two-week sprints, and you're
constantly tweaking based on user feedback.
Speaker 1 (03:21):
So the big advantage of agile is it tackles changing
requirements head on. It sort of expects things to change.
Speaker 2 (03:28):
It's designed for that. It assumes the initial requirements won't
be perfectly stable. It builds in that flexibility, reducing the
risk of building something obsolete the day it launches.
Speaker 1 (03:37):
Okay. So, whether using structured, OO, or agile, it all kicks off with the systems request. Right? Someone identifies a
problem or a need.
Speaker 2 (03:45):
Yes, but just saying we need a new system isn't enough.
You need a solid business case, the justification, why are
we doing this?
Speaker 1 (03:51):
And what usually drives that request? What are the common reasons?
Speaker 2 (03:55):
Well, there are usually about six main drivers we see. First, needing stronger controls, better security, maybe compliance. Second, pretty common, reduced costs: automation, efficiency. Makes sense. Third, needing more information, better reporting, insights from managers. Fourth and fifth often relate to existing systems, maybe lacking flexibility or just being difficult to
Speaker 1 (04:18):
learn. Old, clunky systems.
Speaker 2 (04:20):
Yeah, and finally, sometimes it's needed to meet a bigger
strategic goal for the whole company.
Speaker 1 (04:25):
So once you have that driver, that business case, you hit a crucial checkpoint, the feasibility study, before spending real money.
Speaker 2 (04:33):
Absolutely. You have to check if it's even viable. We look at it from four angles: operational, technical, schedule, and economic feasibility.
Speaker 1 (04:40):
Operational feasibility. That one sounds a bit fuzzy, like, will
people use it? How do you actually measure that? Isn't
it just a feeling?
Speaker 2 (04:46):
It shouldn't be just a feeling. It needs a proper
assessment of the organizational culture. Will it fit how people
actually work? You measure it by talking to users, looking
at training needs, seeing if it's too disruptive.
Speaker 1 (04:58):
Okay, so it's about the humans. Definitely.
Speaker 2 (05:01):
Technical feasibility is more straightforward. Do we have the tech, the hardware, the software, the right people? Schedule feasibility: can we realistically do it in the proposed timeframe?
Speaker 1 (05:11):
And then the big one, economic feasibility. Do the benefits outweigh the costs? Which brings us straight to total cost of ownership, TCO.
Speaker 2 (05:20):
TCO is absolutely central here and it's where estimates often
go wrong. You have to look beyond the initial purchase price.
Speaker 1 (05:27):
Right, not just the sticker price.
Speaker 2 (05:29):
No, you need to include the indirect costs: user support, ongoing training, system admin time, and, critically, lost productivity if
the system goes down or is just poorly designed. If
you ignore those, your economic case is basically fiction.
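To make that concrete, here's a hedged sketch in Python: the purchase price is only one line item, and the indirect categories mentioned above get added in. All the figures and category names are made up for illustration.

```python
# Illustrative only: total cost of ownership is the sticker price
# plus the indirect costs that estimates often leave out.
annual_costs = {
    "license_or_purchase": 120_000,        # the visible "sticker price"
    "user_support": 35_000,
    "ongoing_training": 15_000,
    "system_administration": 40_000,
    "lost_productivity_downtime": 25_000,  # often ignored, often decisive
}

tco = sum(annual_costs.values())
direct = annual_costs["license_or_purchase"]
print(f"Direct cost:  {direct:>10,}")
print(f"Total TCO:    {tco:>10,}")
print(f"Hidden share: {(tco - direct) / tco:.0%}")
```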
Speaker 1 (05:43):
We see lots of talk about moving legacy stuff to
the cloud to cut TCO. Does that generally work out?
Speaker 2 (05:48):
It often does, yeah, because you shift from big upfront
capital costs to ongoing operational costs, pay as you go. But there's a hidden TCO risk there too: vendor lock-
Speaker 1 (05:58):
in. Getting stuck with one provider? Exactly.
Speaker 2 (06:01):
You might save on hardware, but now you're dependent on
their pricing, their service levels. If you haven't planned an
exit strategy or maybe a multi-cloud approach, those long-term costs could creep up unexpectedly.
Speaker 1 (06:13):
All right. So let's say the project looks feasible. Now we dive deep into analysis, figuring out precisely what this
thing needs to do. We need good requirements.
Speaker 2 (06:22):
And good means three things. They have to be valid (actually support what the business needs), consistent (no contradictions), and complete (nothing major missed).
Speaker 1 (06:34):
Easier said than done, I imagine.
Speaker 2 (06:36):
Oh absolutely. This is where you run into the classic
problem of feature creep. Requirements just keep expanding or changing
after you've already started building. It's probably the number one
killer of project schedules and budgets.
Speaker 1 (06:46):
Feature creep. Yeah, I remember one project. The system was
almost done, then suddenly a senior manager decides it needs
to integrate with some obscure custom labeling machine no one had mentioned before.
Speaker 2 (06:56):
That's a perfect, painful example. It highlights why fact-finding is so crucial. When analysts do interviews, they can't just ask what; they need to ask why, to get to those hidden assumptions.
Speaker 1 (07:08):
Engaged listening is key, and sometimes you just need to
watch people work.
Speaker 2 (07:12):
Observation, yes, but you have to be aware of the Hawthorne effect. People act differently when they know they're being watched.
Speaker 1 (07:19):
Right, they might follow the official process perfectly for you,
but that's not how they usually do it.
Speaker 2 (07:23):
Exactly. So you might need multiple observations, maybe compare the
official way with how things actually get done day to day.
We also use sampling, collecting actual documents, maybe systematically, like
grabbing every tenth invoice to get unbiased data and to.
Speaker 1 (07:38):
fight that feature creep and get everyone on the same
page early.
Speaker 2 (07:41):
There's JAD, joint application development. Yeah. JAD sessions are great. You lock users, managers, and IT folks in a room together
to hammer out the requirements collaboratively.
Speaker 1 (07:51):
The big win there is user buy-in.
Speaker 2 (07:53):
Huge win. It fosters a real sense of user ownership. If they help define the requirements, they're less likely to complain or demand big changes later. It de-risks the
whole requirements phase significantly.
Speaker 1 (08:06):
Okay, so you gather all this info. How do you make sense of it, turn it into something usable for design? That's where modeling comes in, right?
Speaker 2 (08:13):
We create models. There's the logical model, that's the what: what functions does the system need, regardless of technology? Then there's the physical model, that's the how: which specific software, hardware, and networks are we using?
Speaker 1 (08:26):
And tools like data flow diagrams help visualize that? Exactly.
Speaker 2 (08:29):
DFDs, context diagrams. They map out how data moves through
the system, how it gets transformed. They turn those pages
of notes into a visual plan and architecture.
Speaker 1 (08:38):
Moving into the design phase now, we're thinking about the internal structure, the data, but also how users will actually interact with it. Let's start with data quality. Why is having the same info in multiple places, like that Mario's Auto Shop example, such a bad idea?
Speaker 2 (08:53):
Data redundancy. It's a recipe for disaster, basically because it
guarantees inconsistency. If Mario updates some mechanic's pay rate in
one file, but forgets the other five places it's stored.
Speaker 1 (09:03):
Your reports are wrong, your payroll is wrong.
Speaker 2 (09:07):
Chaos, precisely. The fix is normalization. It's a technique to
structure your data properly.
Speaker 1 (09:13):
Ah, normalization often gets technical fast, especially third normal form, 3NF. That definition, every non-key field depends on the key, the whole key, and nothing but
the key. Can you translate that?
Speaker 2 (09:25):
Ha, yeah, that's pure database talk. Let's simplify. The core idea of 3NF is: store each piece of information once, in one place. Make sure every bit of
data in a table is directly related only to the
main identifier for that table, like the customer ID or
order number.
Speaker 1 (09:39):
So no repeating addresses or names all over the place? Exactly.
Speaker 2 (09:43):
It eliminates those inconsistency problems and makes updating data way
easier and safer.
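To ground the Mario's Auto Shop point, here's a small, made-up sketch in Python contrasting a redundant layout with one where each fact is stored once and referenced by key, which is the spirit of third normal form.

```python
# Redundant design: the mechanic's pay rate is copied into every job record.
# Updating the rate in one place but not the others creates inconsistency.
jobs_redundant = [
    {"job_id": 1, "mechanic": "Ana", "pay_rate": 32.00},
    {"job_id": 2, "mechanic": "Ana", "pay_rate": 35.00},  # stale copy, conflicting data
]

# Normalized design: each fact is stored once, keyed by its identifier.
mechanics = {"M-01": {"name": "Ana", "pay_rate": 35.00}}
jobs = [
    {"job_id": 1, "mechanic_id": "M-01"},
    {"job_id": 2, "mechanic_id": "M-01"},
]

# A pay-rate change now happens in exactly one place.
mechanics["M-01"]["pay_rate"] = 36.50
for job in jobs:
    rate = mechanics[job["mechanic_id"]]["pay_rate"]
    print(job["job_id"], rate)  # both jobs see the same, current rate
```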
Speaker 1 (09:47):
Okay, data sorted. Now the user side, the user interface
or UI. IBM had that concept of a transparent interface.
Speaker 2 (09:54):
Yeah. The idea is the user should see right through
the interface to their own work. The UI shouldn't get in the way. It should be user-centered, intuitive. People should spend their brain power on their tasks, not fighting the software. Good UI boosts productivity.
Speaker 1 (10:08):
And it helps prevent bad data getting in, the whole garbage in, garbage out idea. GIGO.
Speaker 2 (10:14):
Directly. Good UI design enforces data quality through validation rules.
These are automatic.
Speaker 1 (10:21):
checks? Like what kind of checks?
Speaker 2 (10:22):
Well, simple things like a range check, making sure an order quantity isn't, say, negative or ridiculously large; or a validity check, ensuring a state code entered is actually one of the valid US state codes; or maybe a sequence
check for batch processing.
Speaker 1 (10:38):
Little checks that make a big difference to data accuracy.
Speaker 2 (10:41):
Huge difference. They're essential quality control built right into the design.
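A minimal sketch, with hypothetical field names and limits, of the range check and validity check just described; a real system would wire rules like these into its input forms.

```python
# Hypothetical validation rules of the kind described above.
VALID_STATE_CODES = {"CA", "NY", "TX", "WA"}  # abbreviated list, for illustration only

def range_check(quantity: int, low: int = 1, high: int = 10_000) -> bool:
    """Reject order quantities that are negative, zero, or implausibly large."""
    return low <= quantity <= high

def validity_check(state_code: str) -> bool:
    """Accept only codes that appear in the approved reference set."""
    return state_code.upper() in VALID_STATE_CODES

print(range_check(-5))       # False: negative quantity
print(range_check(250))      # True
print(validity_check("ZZ"))  # False: not a valid state code
print(validity_check("tx"))  # True
```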
Speaker 1 (10:45):
And nowadays we have to design for phones, tablets, desktops, everything.
Speaker 2 (10:50):
That's where responsive web design is crucial. It's basically a
must-have now. The system has to look good and work properly on any screen size. If your field techs can't easily use it on their tablets, you've got a
usability failure right there.
Speaker 1 (11:02):
Okay, Phase four, implementation. We've planned, analyzed, designed. Now we actually build it, test it, and roll it out. This
feels like where budgets can really blow up if the
early work wasn't solid.
Speaker 2 (11:15):
Oh, definitely. The quality of your requirements back in Phase two directly impacts how painful and expensive testing is. Now, testing isn't just one thing either. We break it down. How so? Well, first there's unit testing. Programmers test their own individual pieces, the modules they wrote, to make sure that specific
Speaker 1 (11:32):
bit works. Okay, testing the small parts.
Speaker 2 (11:34):
Then integration testing: does module A talk correctly to module B? Do the different pieces work together as expected? Checking the connections, exactly. And finally, system testing. This is the big one. Does the entire system, end to end, meet all those requirements we defined way back in the system requirements document?
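A small sketch of that distinction using Python's built-in unittest: one test exercises a single module on its own, the other checks that two modules work together. The functions under test are invented for the example, not anything from the episode.

```python
import unittest

# Two hypothetical modules under test.
def calculate_tax(amount: float, rate: float = 0.08) -> float:
    return round(amount * rate, 2)

def build_invoice_total(subtotal: float) -> float:
    # Depends on calculate_tax, so testing it also exercises
    # the connection between the two pieces.
    return round(subtotal + calculate_tax(subtotal), 2)

class UnitLevelTests(unittest.TestCase):
    def test_tax_on_simple_amount(self):
        # Unit level: one module, in isolation.
        self.assertEqual(calculate_tax(100.0), 8.0)

class IntegrationLevelTests(unittest.TestCase):
    def test_invoice_total_includes_tax(self):
        # Integration level: two modules working together.
        self.assertEqual(build_invoice_total(100.0), 108.0)

if __name__ == "__main__":
    unittest.main()
```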
Speaker 1 (11:51):
So assuming it passes all the tests, we have to
actually switch it on for real users. The cutover sounds risky.
Speaker 2 (11:57):
It can be very risky. Choosing the right strategy is key. There are a few main ways to do it. The riskiest, but fastest and cheapest, is direct cutover.
Speaker 1 (12:05):
Just flick the switch, turn off the old, turn on.
Speaker 2 (12:06):
the new. Yep, big bang. If the new system has problems,
well you've got a big problem. Business might grind to
a halt. Ouch.
Speaker 1 (12:15):
What's safer?
Speaker 2 (12:15):
A pilot implementation is safer. You roll out the new
system to just a small group first, one department, maybe
one office, work out the kinks with them before going live.
Speaker 1 (12:25):
everywhere. Okay, that minimizes the initial blast radius if something goes wrong.
Speaker 2 (12:29):
Right. And the safest, but also the most expensive and complex, is
Speaker 1 (12:33):
Parallel operation, running both systems at the same time.
Speaker 2 (12:36):
For a period, yes. You run the old and the new side by side. Users might even do tasks in both.
If the new one fails, the old one's still there
running perfectly, but you're paying double the operational cost, double
the effort for that safety net.
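One way to picture a parallel run, sketched with placeholder functions in Python: each transaction goes through both the old and the new system, and any disagreement is logged for review before anyone trusts the new one. The function names and the minimum-charge rule are hypothetical, not a real API or a detail from the episode.

```python
# Hypothetical parallel-run harness: feed each transaction to both systems,
# keep the old system's answer as the trusted one for now, and log any
# disagreement for investigation before committing to the cutover.
def legacy_process(order: dict) -> int:
    # Stand-in for the old system: quantity times price in cents.
    return order["qty"] * order["price_cents"]

def new_process(order: dict) -> int:
    # Stand-in for the new system; imagine it enforces a 100-cent minimum charge.
    return max(100, order["qty"] * order["price_cents"])

def parallel_run(orders: list[dict]) -> list[dict]:
    mismatches = []
    for order in orders:
        old_result = legacy_process(order)
        new_result = new_process(order)
        if old_result != new_result:
            mismatches.append({"order": order, "old": old_result, "new": new_result})
    return mismatches

# The second order disagrees, so it gets flagged before go-live.
print(parallel_run([
    {"qty": 2, "price_cents": 1999},
    {"qty": 1, "price_cents": 50},
]))
```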
Speaker 1 (12:50):
So it's a trade-off. Risk versus cost.
Speaker 2 (12:52):
Always a trade-off in system deployment. And that brings us eventually to Phase five, support and security. The system is live, but as we talked about with TCO, it's never really done, is it? The real cost unfolds
over years of ownership.
Speaker 1 (13:08):
Keeping it running, keeping it relevant, which involves different kinds
of maintenance work. Exactly.
Speaker 2 (13:13):
There are four main types you'll encounter. First is corrective maintenance. That's just fixing bugs, errors that pop up after launch.
Speaker 1 (13:20):
Standard bug fixing.
Speaker 2 (13:21):
Then there's adaptive maintenance. This is crucial: adapting the system because the world changed, maybe a new tax law, a
new regulation, a change in business partners.
Speaker 1 (13:31):
The system has to keep up so it stays compliant
and useful. Right.
Speaker 2 (13:35):
Third is perfective maintenance. This is about making it better,
more efficient, easier to use, maybe based on user feedback, tweaking.
Speaker 1 (13:42):
it for performance, fine-tuning.
Speaker 2 (13:44):
And finally, preventive maintenance. This is proactive: changing parts of the system before they break, maybe updating old code, libraries,
or infrastructure to reduce the chance of future failures.
Speaker 1 (13:55):
And that preventive part leads us right to the big
ongoing challenge, doesn't it? Especially now, with technology moving so fast. Agile, the cloud, AI is coming.
Speaker 2 (14:06):
It really does. It puts constant pressure on decision makers. So the final thought, really, for you listening, is this strategic question: how do you balance spending money today on that proactive preventive maintenance, updating things, simplifying things, knowing it costs time and resources,
Speaker 1 (14:26):
Now, against the risk of not doing it and facing potentially much higher, maybe even catastrophic, IT costs down the
road when something big breaks or the whole system needs
replacing because you waited too long.
Speaker 2 (14:37):
That balancing act, weighing today's investment against tomorrow's potential disaster.
That's the core strategic challenge in managing systems over their life.
Speaker 1 (14:46):
A constant calculation. This has been incredibly insightful, a really
clear guide through how these complex systems actually get built
and managed.
Speaker 2 (14:53):
Glad to be here. It's a fascinating field.
Speaker 1 (14:54):
And for you, our listeners, we hope you feel better
equipped now to understand, maybe question, or even manage the
strategic thinking behind the next big IT project you encounter.
Join us next time on the Deep Dive.