
October 8, 2025 • 57 mins
Shawn Tierney meets up with Connor Mason of Software Toolbox to learn about their company and products, as well as see a demo of their products in action, in this episode of The Automation Podcast. For any links related to this episode, check out the "Show Notes" located below the video.

Watch The Automation Podcast from The Automation Blog:

Listen to The Automation Podcast from The Automation Blog:

The Automation Podcast, Episode 248 Show Notes:

Special thanks to Software Toolbox for sponsoring this episode so we could release it "ad free!"

To learn about Software Toolbox, please check out the links below:

TOP Server
Cogent DataHub
Industries
Case studies
Technical blogs

Read the transcript on The Automation Blog: (automatically generated)

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome back to the automation podcast. My name
is Shawn Tierney with Insights and Automation, and
I wanna thank you for tuning back in
this week. Now this week on the show,
I meet up with Connor Mason from Software
Toolbox,
who gives us an overview of their product
suite, and then he gives us a demo
at the end. And even if you're listening,
I think you're gonna find the demo interesting
because Connor does a great job of talking

(00:22):
through what he's doing on the screen.
With that said, let's go ahead and jump
into this week's episode with Connor Mason from
Software Toolbox.
I wanna welcome Connor from Software Toolbox to
the show. Connor, it's really exciting to have
you. It's just a lot of fun talking
to your team as we prepared for this,
and, I'm really looking forward to it because I
just know in your company over the years,

(00:42):
you guys have so many great solutions
that I really just wanna thank you for
coming on the show. And before you jump
into talking about products and technologies Yeah. Could
you first tell us just a little bit
about yourself?
Absolutely. Thanks, Shawn, for having us on. Definitely
a pleasure to be a part of this
environment. So my name is Connor Mason. Again,
I'm with Software Toolbox.

(01:02):
We've been around for quite a while. So
we'll get into some of that history as
well before we get into all the the
fun technical things. But, you know, I've worked
a lot with the variety of OT and
IT projects that are ongoing at this point.
I've come up through our support side.
It's definitely where we grow a lot of
our technical skills. It's a big portion of
our company. We'll get into that a
little more. Currently a technical application consultant lead.

(01:24):
So like I said, I help run
our support team, help with these large solutions
based projects and consultations,
to find what's best for you guys
out there. There's a lot of different things
in our industry that are new and
exciting. It's fast paced. Definitely keeps me busy.
My background was actually in data analytics. I
did not come through engineering, did not come

(01:45):
through the automation,
trainings at all. So this is a whole
new world for me about five years ago,
and I've learned a lot, and I really
enjoyed it.
So, I really appreciate your time having us
on here,
Shawn. Well, I appreciate you coming on. I'm
looking forward to what you're gonna show us
today. I had, the audience should know,
a little preview of what they
were gonna show, so I'm looking forward to
it. Awesome. Well, let's jump right into it

(02:06):
then. So like I said, we're here at
Software Toolbox,
kinda have this ongoing
logo and just word map of connect
everything,
and that's really where we lie.
Some people have called us data plumbers in
the past. It's all these different connections where
you have something, maybe legacy or something new,
you need to get into another system. Well,

(02:26):
how do you connect all those different points
to it? And, you know, throughout
all these projects we worked on, there's always
something unique in those different projects. And we
try to work in between those unique areas
and in between all these different integrations
and be something that people can come to
as an expert,
have those high level discussions, find something that
works for them at a cost effective solution.

(02:47):
So outside of just, you know, products that
we offer, we also have a lot of
just knowledge in the industry, and we wanna
share that.
You'll kinda see along here, there are some
product names as well that you might recognize.
Our TopServer and OmniServer, we'll be talking
about LOPA as well. It's been around in
the industry for, you know, decades at this
point.
And also our symbol factory might be something

(03:07):
you you may have heard in other products,
that they actually utilize themselves for HMI and
and SCADA graphics.
That is our product. So you
may have interacted with us without even
knowing it, and I hope we get to
kind of talk more about things that we
do.
So before we jump into all the fun
technical things as well, I kind of want
to talk about just the overall Software Toolbox

(03:28):
experience as we call it.
We're more than just someone that wants
to sell you a product. We really
do work with the idea of solutions.
How do we provide you value and solve
the problems that you are facing
as the person that's actually working out there
on the field,
on those operation lines,
and making things as well. And that's really

(03:49):
our big priority
is providing a high level of knowledge,
variety of the things we can work with,
and then also the support.
It's very dear to me, coming through the
support team, still working, you know,
day to day throughout Software Toolbox,
and it's something that has been ingrained into
our heritage.
Next year will be thirty years of Software

(04:10):
Toolbox in 2026.
We were established in 1996.
Through those thirty years, we have committed to
supporting the people that we work with. And
I can just tell you that
that entire motto lives throughout everyone that's here.
So from that, over 97%
of the customers that we interact with through
support say they had an awesome or great

(04:32):
experience.
Having someone that you can call that understands
the products you're working with, understands the environment
you're working in, understands the priority of certain
things. If you ever have a plant shut
down, we know how stressful that is. Those
are things that we work through and help
people throughout. So
this really is the core pillars of Software
Toolbox and who we are,

(04:53):
beyond just the products, and and I really
think this is something unique
that we have continued to grow
and stand upon for those thirty years.
So jumping right into
some of the industry challenges we've been seeing
over the past few years.
This is also a fun one for me,
talking about data analytics and tying these things

(05:15):
together.
In my prior life and education,
I worked with just tons of data, and
I never fully knew where it might have
come from, why it was such a mess,
who structured it that way, but it's my
job to get some insights out of that.
And knowing
what the data actually was and why it
matters is a big part of actually getting

(05:36):
value. So if you have dirty data, if
you have data that's just clustered, it's in
silos,
it's very often you're not gonna get much
value out of it.
This was a study that we found in
2024,
from Gartner Research, and it said that,
based on the question
that businesses were asked, were there any top

(05:57):
strategic priorities
for your data analytics
functions in 2024?
And almost 50%,
it's right at 49%,
said that they wanted to improve data quality,
and that was a strategic priority.
This is about half the industry just
talking about data quality, and it's exactly because
of those reasons I said in my prior
life gave me a headache, to look at

(06:18):
all these different things that
I don't even know where they came from
or why they were so different. And
the person that made that may be
gone, may not have the context, and making
that connection from the person that implemented things to
the people that are making decisions
is a very big task sometimes. So if
we can create a better pipeline of data
quality at the beginning,

(06:39):
makes those people's lives a lot easier up
front
and allows them to get value out of
that data a lot quicker. And that's what
businesses need. You know, I wanna talk about
just data quality. Right? Mhmm. I think a
lot of us, when we think of that,
we think of,
you know, error error detection. We think of
lost connections. We think of,

(07:01):
you know, just garbage data coming through. But
I I think from an analytical side, there's
a different view on that, you know, in
line with what you were just saying.
So how do you when you're talking to
somebody about data quality, how do you get
them to shift gears and focus in on
what you're talking about and not like a
quality connection to the device itself?

(07:21):
Absolutely. Yeah. I kinda live in both
those worlds now. You know, I get
to see that that connection state. And when
you're operating in real time, that quality is
also very important to you. Mhmm. And I
kind of use that at the same realm.
Think of that when you're thinking in real
time, if you know what's going on in
the operation and where things are running, that's
important to you. That's the quality that you're
looking for. You have to think beyond just

(07:41):
real time. We're talking about historical data. We're
talking about data that's been stored for months
and years. Think about the quality of that
data once it's made it up to that level.
Are they gonna understand what was happening around
those periods? Are they gonna understand what those
tags even are? Are they gonna understand
what those conventions are that you've implemented
to give them insights into this operation? Is
that a clear picture? So, yeah, you're absolutely

(08:03):
right. There are two levels to this, and
and that is a big part of it.
The the real time data and historical, and
we're gonna get some of that into into
our demo as well.
It it's a it's a big area for
the business,
and the people working in the operations.
Yeah. I think about quality too.
Think,
you know, you may have data. It's good
data. It was collected correctly. You had a

(08:24):
good connection to the device. You got it.
You got it as often as you want.
But that data could really be useless. It
could tell you nothing. Right. Exactly. Right? It
could be a flow rate on part of
the process that's irrelevant to
monitoring the actual production of the product or
or whatever you're making. And,
you know, I've known a lot of people
who filled up their databases, their historians,

(08:47):
with they just they just logged everything. And
it's like a lot of that data was
what I would call low quality because it's
low information value. Right?
Absolutely. I'm sure you run into that too.
Yeah. We we run into a lot of
people that, you know, I've got x amount
of data points in my historian and, you
know, then we start digging into, well, I
wanna do something with it or wanna migrate.

(09:08):
Okay. Like, well, what do you wanna achieve
at the end of this? Right? And and
asking those questions, you know, it's great that
you have all these things historized.
Are you using it? Do you have the
right things historized?
Are they even set up to be, you
know, worked upon once they are historized by
someone outside of this landscape?
And I think OT plays such a big
role in this, and that's why we start

(09:29):
to see the convergence of the IT and
OT teams just because that communication
needs to occur sooner. So we're not just
passing along, you know, low quality data, bad
quality data as well.
And we'll get into some of that later
on.
So to jump into some of our products
and solutions, I kinda wanna give this overview
of the automation pyramid. This is where we

(09:50):
work from things like the field device communications.
And you have certain sensors, meters,
actuators along the actual lines,
wherever you're working. We work across all the
industries, so this can vary between those.
Through there, you work up to kind of your
control area, where a lot of control engineers are
working.
This is where I think a lot of
the audience is very familiar with PLCs. Your

(10:11):
your typical names: Siemens, Rockwell,
your Schneiders, that are creating
these hardware products. They're interacting with things on
the operation level, and they're generating data.
That that was kind of our bread and
butter for a very long time and still
is that communication level of getting data from
there, but now getting it up the stack
further into the pyramid

(10:32):
of your supervisory,
MES connections,
and it's also now open to these ERPs.
We have a lot of large corporations that
have data across
variety of different solutions and also want to
integrate directly down into their operation levels.
There's a lot of value to doing that,
but there's also a lot of watch outs,
and a lot of security concerns. So that'll

(10:52):
be a topic that we'll be getting into.
We also all know that the cloud is
here. It's been here, and it's gonna
continue to
push its way,
these cloud providers, into OT as well.
There there's a lot of benefit to it,
but there there's also some watch outs as
this kind of realm,
changes

(11:13):
in the landscape that we've been used to.
So there's a lot of times that we
wanna get data out there. There's value in
AI agents. It's a hot
commodity right now.
Analytics as well. How do we get those
things directly from shop floor,
up into the cloud directly, and how do
we do that securely?
It's things that we've been working on. We've
had successful projects,

(11:34):
continues to be an interest area and I
don't see it slowing down at all.
Now, when we kind of begin this level
at the bottom of connectivity,
people mostly know us for our TopServer.
This is our platform for industrial device connectivity.
It's a thing that's talking to all those
different PLCs in your plant, whether that's brownfield

(11:55):
or greenfield.
We pretty much know that there's never gonna
be a plant with a single PLC manufacturer.
There's always gonna
be something that's slightly different.
Definitely with brownfield, different engineers made different
choices,
things have been in place for years, and you gotta keep
running them.
TopServer provides this single platform to connect to

(12:15):
a long laundry list of different PLCs.
And if this sounds very familiar to Kepserver,
well, you're not wrong.
Kepserver is the same exact technology that TopServer
is. What's the difference then is probably the
biggest question we usually get.
The difference technology wise is nothing.
The difference in the back end is that

(12:37):
actually
it's all the same product, same product releases,
same price,
but we have been the biggest single
source of Kepserver, or TopServer, implementations into the
market,
for almost two-plus decades at this point.
So as the single biggest
purchaser, we own this own-labeled version
of Kepserver to provide to our customers. They

(12:58):
interact with our support team, our solutions teams
as well, and we sell it along the
stack of other things because it it fits
so well. And we've been doing this since
the early two thousands when, Kepware was a
a much smaller company than it is now,
and we've had a really great relationship with
them. So if you've enjoyed the technology of
of Kepserver,
maybe there's some users out there. If you

(13:18):
ever heard of TopServer and that has been
unclear, I hope this clarifies it.
But it it is a great technology stack
that that we build upon and we'll get
into some of that in our demo.
Now the other question is, what if you
don't have a standard communication protocol,
like a modbus,
like an Allen Bradley PLC as well? We
see this a lot with, you know, testing

(13:39):
areas,
pharmaceuticals,
maybe also in packaging, barcode scanners,
weigh scales,
printers online as well.
They they may have some form of basic
communications that talks over just TCP or or
serial.
And how do you get that information that's
really valuable still, but it's not going through
a PLC. It's not going into your typical

(14:01):
HMI and SCADA.
It might be very manual process for a
lot of these test systems as well, how
they're collecting and analyzing the data.
Well, you may have heard of our OmniServer
as well. It's been around, like I
said, for a couple decades and just a
proven solution that
without coding, you can go in and build
a custom protocol
that expects a format
from that device,

(14:22):
translates it, puts it into standard tags, and
now those tags can be accessible through
the open standards of OPC,
or AVEVA's
SuiteLink as well.
And that really provides a nice combination
of your standard communications
and also these more custom communications may have
been done through scripting in the past. Well,

(14:44):
you know, put this onto,
an actual server that can communicate through those
protocols natively,
and just get that data into those SCADA
systems, HMIs,
where you need it.
You know, I used that. Many years ago,
I had an integrator
who came to me. This is back in
the RSView days. He's like, Shawn, I got, like, 20

(15:05):
Eurotherm devices on a 485, and
they speak ASCII, and I gotta
get into RSView32. And,
you know, OmniServer, I love that you
could basically develop your own protocol, and we did Omega
and some other devices too. It's beautiful.
And and the fact that when you're testing
it, it color codes

(15:26):
everything.
So you know, hey. That part worked. The
header worked. The data worked. Oh, the trailer
didn't work, or the terminator didn't work, or
the data's not in the right format. Or
I just it was a joy to work
with back then, and I can imagine it's
only gotten better since.
Yeah. I think it's like a little engineer
playground where you get in there and start
really decoding and seeing how these devices communicate.

(15:47):
And then once you've got it running, it's
one of those things that just
performs, and it's saved many people
from developing custom code and having to
manage that custom code and integrations,
you know, for many years. So it's
one of those things that's kinda tried,
tested, and it's kind of a staple
still in our base-level communications.
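The kind of custom ASCII framing described here, which OmniServer lets you define without code, can be sketched in Python. The STX/ETX framing, field layout, and tag names below are invented for illustration, not any specific device's protocol:

```python
# Hypothetical example: parsing a simple STX...ETX-framed ASCII reply
# of the sort a barcode scanner or weigh scale might send. OmniServer
# defines this declaratively; this sketch shows the same idea in code.
STX, ETX = "\x02", "\x03"

def parse_reply(raw: str) -> dict:
    """Extract comma-separated fields from one STX...ETX frame."""
    if not (raw.startswith(STX) and raw.endswith(ETX)):
        # e.g. a missing terminator -- the kind of framing error
        # OmniServer's protocol tester highlights for you
        raise ValueError("bad framing")
    unit, temp, status = raw[1:-1].split(",")
    return {"unit": unit, "temperature": float(temp), "status": status}

print(parse_reply("\x02TIC101,72.5,OK\x03"))
```

Once parsed, the fields become standard tags that an OPC client can read, which is the translation step the conversation describes.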

(16:09):
Alright. So moving along kind of our automation
pyramid as well.
Another part of our large offering is the
Cogent data hub. Some people may have heard
from this as well. It's been around for
a good while. It's been part of our
portfolio for for a while as well. This
starts building upon where we had the communication
now up to those higher echelons of the
pyramid.

(16:30):
This is gonna bring in a lot of
different connectivities. If you're
listening,
it's kind of this hub-and-spoke
type of concept for real time data. We
also have historical implementations.
You can connect through a variety of different
things. OPC, both the profiles for alarms and
events, and even OPC UA's alarming conditions, which

(16:51):
is still getting adoption across the, across the
industry, but it is growing.
Apart from the OPC UA standard, we
have integrations to MQTT.
It can be its own MQTT
broker, and it can also be an MQTT
client.
That has grown a lot. It's one of
those things that lives beside OPC UA,
not exactly a replacement.

(17:13):
If you ever have any questions about that,
it's definitely a topic I love to talk
about.
There's space for for this to combine the
benefits of both of these, and it's so
versatile and flexible
for these different type of implementations.
On top of that,
it it's it's a really strong tool for
conversion and aggregation.

(17:33):
You kind of add this, like, its name
says, it's a it's a data hub. You
send all the different information to this. It
stores it into,
a hierarchy with
a variety of different modeling that you can
do within it. That's gonna store these values
across a standard data format.
Once I had data into this, any of
those different connections, I can then send data

(17:54):
back out.
So if I have anything that I know
is coming in through a certain plug-in
like OPC,
bring that in, send it out on
these other ones, OPC
DA over to MQTT.
It could even do DDE if I'm still
using that, which I probably wouldn't suggest. But
overall,
there's a lot of good benefits from having
something that can also be a standardization,

(18:17):
between all your different connections. I have a
lot of different things, maybe variety of OPC
servers,
legacy or newer.
Bring that into a data hub, and then
all your other connections, your historians,
your MES, your SCADAs, it can connect to
that single point. So it's all getting the
same data model
and values from a single source rather than

(18:39):
going out and making many to many connections.
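The hub-and-spoke aggregation idea can be sketched as follows. This is an illustrative model only, not Cogent DataHub's actual API; the class, method, and tag names are invented:

```python
# Sketch of the hub-and-spoke idea: sources write into one normalized
# tag store, and every consumer subscribes to that single point
# instead of making many-to-many connections between systems.
from typing import Callable

class DataHub:
    def __init__(self) -> None:
        self.tags: dict[str, float] = {}      # one standard data model
        self.subscribers: list[Callable[[str, float], None]] = []

    def subscribe(self, callback: Callable[[str, float], None]) -> None:
        self.subscribers.append(callback)

    def write(self, path: str, value: float) -> None:
        self.tags[path] = value               # latest value per tag path
        for notify in self.subscribers:       # fan out to every consumer
            notify(path, value)

hub = DataHub()
hub.subscribe(lambda p, v: print(f"historian got {p}={v}"))
hub.subscribe(lambda p, v: print(f"SCADA got {p}={v}"))
hub.write("PlantA/Line1/Flow", 12.7)   # both consumers see the same value
```

The design point the conversation makes is visible here: adding another consumer is one `subscribe` call against the hub, not a new connection to every OPC server.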
A large thing that it was originally
used for was getting around DCOM.
That word is, you know, it might send
some shivers down people's spines still, to this
day, but it's it's not a fun thing
to deal with DCOM and also with the
security hardening.

(19:00):
It's just not something that you really want
to do.
I'm sure there's a lot of security
professionals who would advise against ever doing it.
This tunneling will allow you to have a
data hub that locally talks to any
OPC DA server or client,
communicate between two data hubs over a tunnel
that pushes the data just over TCP, takes
away all the comm wrappers,

(19:20):
and now you just have values that get
streamed in between. Now you don't have to
configure any DCOM at all, and it's all
local.
So this helped a lot of people transitioning
between products where maybe the server only supports
OPC DA, and then the client is now
supporting OPC UA. They can't change it yet.
This has allowed them to implement a solution

(19:41):
quickly, and at a cost-effective
price, without ripping everything out.
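The tunneling concept, two hubs streaming tag values over plain TCP so no DCOM crosses the network, can be sketched like this. The newline-delimited JSON wire format here is invented for illustration and is not DataHub's real protocol:

```python
# Each side talks OPC locally; only simple serialized tag updates
# cross the network, over a single TCP port. No COM wrappers, no
# DCOM configuration. (Wire format invented for illustration.)
import json, socket, threading

# "Remote" hub endpoint: bind first so the client can't race the listener.
srv = socket.create_server(("127.0.0.1", 0))   # OS picks a free port
port = srv.getsockname()[1]
received = []

def serve() -> None:
    # Receive newline-delimited JSON tag updates and hand them to
    # whatever local OPC clients are attached on this side.
    conn, _ = srv.accept()
    with conn, conn.makefile() as f:
        for line in f:
            received.append(json.loads(line))

t = threading.Thread(target=serve)
t.start()

# "Local" hub: forward a value it just read from its OPC DA server.
with socket.create_connection(("127.0.0.1", port)) as c:
    update = {"tag": "Line1/Temp", "value": 72.5, "quality": "Good"}
    c.sendall((json.dumps(update) + "\n").encode())

t.join()
srv.close()
print(received)
```

The point of the pattern is that each DCOM conversation stays on its own machine; only the value stream crosses the firewall, on one port you control.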
You know, I wanna ask you too. I
can see because this thing is it's a
data hub. So if you're watching and you're
if you're listening and not watching,
you're not gonna see, you know, server,
client, UA, DA, broker, server, client.
You know, just all these different things up
here on the site. Do you what how

(20:02):
does somebody find out if it does what
they need? I mean, do you guys have
a line they can call to say, I
wanna do this to this. Is that something
Data Hub can do, or is there a
demo? What would you recommend to somebody?
Absolutely.
Reach out to us. We we have a
a lot of content outline, and it's not
behind any paywall or sign in links even.
You you can always go to our website.

(20:23):
It's just softwaretoolbox.com.
Mhmm.
And that's gonna get you to our product
pages. You can download any product directly from
there. They have demo timers. So typically,
with Cogent DataHub, after an hour, it
will stop. You can just rerun it. And
then call our team. Yeah. We have a
solutions team that can work with you on,
hey. What do I need as well? Then

(20:44):
our support team, if you run into any
issues, can help you troubleshoot that as well.
So,
I'll have some contact information at the end,
that'll get some people to, you know, where
they need to go. But you're absolutely right,
Shawn.
Because this is so versatile, everyone's use case
of it is usually something a little bit
different. And the best people to come talk
to that is us because we've we've seen
all those differences. So

(21:05):
I think a lot of people run into
the fact, like, they have a problem. Maybe
it's the one you said where they have
the OPC UA and it needs to connect
to an OPC DA client. And, you know,
and a lot of times, they're a
little gun-shy to buy a license because they
wanna make sure it's gonna do exactly what
they need first. And I think that's where
having your people can, you know, answer their

(21:26):
questions saying, yes. We can do that or,
no. We can't do that.
Or, you know, a a demo that they
could download and run for an hour at
a time to actually do a proof of
concept for the boss who's gonna sign off
on purchasing this. And then the other thing
is too, a lot of products like this
have options. And you wanna make sure you're
ticking the right boxes when you
buy your license because you don't wanna buy

(21:47):
something you're not gonna use. You wanna buy
the exact pieces you need. So I highly
recommend I mean, this product just does like,
I have, in my mind, like, five things
I wanna ask right now, but not gonna.
But, yeah, def definitely, when it when it
comes to a product like this, great to
touch base with these folks. They're super friendly
and helpful, and, they'll they'll put you in
the right direction.
Yeah. I can tell you, as someone working

(22:08):
in support,
selling someone a solution that doesn't work is
not something I wanna be doing. Bad day. Right.
Exactly. Yeah. And we work very closely,
between anyone that's looking at products.
You know, me being a technical product manager,
well, I'm engaged in those conversations. And
Mhmm. Yeah. If you need a demo license,
reach out to us to extend that. We
wanna make sure that you are buying something

(22:29):
that provides you value.
Now kind of moving on into a similar
realm. This is one of our still somewhat
newer offerings, I say, but it's been around
five-plus years, and it's really grown.
And as I kinda said here, it's called OPC
Router,
and it's not a networking
tool, as a lot of people may kinda
think. It's more of a, kind of
a term about,

(22:50):
again, all these different types of connections. How
do you route them in different ways? It
kind of separates itself from
the Cogent DataHub, acting at
this base level of being like a visual
workflow
that you can assign
various
tasks to. So if I have certain events
that occur, I may wanna do some processing
on that before I just send data along,

(23:12):
where the data hub is really working in
between converting,
streaming data, real time
connections.
This gives you a a kind of a
playground to work around of if I have
certain
tasks that are occurring, maybe through a database
that I wanna trigger off of a certain
value,
based on my SCADA system,
well, you can build that in in these

(23:33):
different workflows to execute exactly what you need.
Very, very flexible.
Again, it has all these different type of
connections.
The very unique ones that have also grown
into kind of that OT IT convergence,
is it can be a REST API
server and client as well. So I can
be sending out requests to,

(23:54):
RESTful servers where we're seeing that hosted in
a lot of new applications. I wanna get
data out of them. Or once I have
consumed
a variety of data, I can become the
REST server in OPC router and offer that
to other applications to request data from itself.
So, again, it can kind of be that
centralized area of information.
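The trigger-and-transfer pattern that OPC Router workflows are built around can be sketched as below. The trigger condition, tag name, and stubbed REST transfer are all hypothetical, stand-ins for what the visual workflow builder wires up:

```python
# Illustrative sketch of a trigger-then-transfer workflow: watch a
# value, and when the trigger condition fires, run the transfer
# steps (here, a stand-in for a REST POST to another application).
import json

def make_workflow(trigger, transfer):
    """Return a handler that runs `transfer` whenever `trigger` fires."""
    last = {"fired": False}
    def on_value(tag, value):
        fire = trigger(value)
        if fire and not last["fired"]:            # fire on the rising edge only
            transfer({"tag": tag, "value": value})
        last["fired"] = fire
    return on_value

posted = []
workflow = make_workflow(
    trigger=lambda v: v > 100.0,                  # e.g. a batch-complete threshold
    transfer=lambda payload: posted.append(json.dumps(payload)),  # stub REST POST
)

for v in (95.0, 101.5, 102.0, 99.0):
    workflow("Line1/BatchCount", v)

print(posted)
```

The same skeleton covers the database case mentioned earlier: swap the transfer stub for a stored-procedure call, and the trigger for a tag change from the SCADA system.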
The other thing as we talked about in
the automation pyramid is it has connections directly

(24:17):
into SAP and ERP systems.
So if you have work orders, if you
have materials,
that you wanna continue
to track and maybe trigger things based off
information from your your operation floors via PLCs
tracking,
how they're using things along the line, and
that needs to match up with what the
SAP system has for,
the amount of materials you have. This can

(24:39):
be that bridge.
It really is built off the mindset of
the OT world as well. So we kinda
say this helps empower the OT level because
we're now giving them the tools that
they understand for what's occurring in their operations.
And what could you do by having a
tool like this to allow you to kind
of create automated workflows

(24:59):
based off certain values and certain events
and automate some of these things that you
may be doing manually or doing very convoluted
through a variety of solutions.
So
this is one of those products as
well that's very advanced in the things it
supports.
Linux and Docker containers is definitely
a hot topic, rightfully so.

(25:22):
And this can run deployed on a
Docker container as well. We've
seen that with the IT folks that
really enjoy being able to control the
entire deployment.
It allows you to update easily, allows you to
control and spin up new containers as well.
This gives you a lot of flexibility
to to deploy and manage these systems.

(25:43):
You know, I may wanna have you back
on to talk about this.
There's an old product called RSSql
that I used to use. It was a
transaction manager, and it would
based on data changing or on a time
that as a trigger, it could take data
either from the PLC to the database or
from the database to the PLC,
and it would work with stored procedures. And

(26:05):
and this seems like it hits all those
points,
And it sounds like it's a visual like
you said, right there on the slide, visual
workflow builder. Yep. So you really piqued my
interest with this one, and and it may
be something we wanna come back to and
and revisit in the future,
because, it just it's just I know that
that older product was very useful
and,

(26:25):
you know, it really solved a lot of old applications back in the day. Yeah. Absolutely. And this just takes that on and builds even more. If anyone was kind of listening at the beginning of this year, too, there was a conference called Prove It that was very big in the industry. We were there, and we presented on stage a solution that we had. Highly recommend going and searching for that. It's on our web
going searching for that. It's on our web

(26:46):
pages. It's also on their YouTube links, and
it's it's called Prove
It. And
OPC router was a big part of that
in the back end.
I would love to dive in and show
you the really unique things. Kind of as
a quick overview, we're able to use Google
AI vision to take camera data and detect
if someone was wearing a hard hat. All
that logic and behind of getting that information

(27:07):
to Google AI vision, was through REST with
OPC router. Then we were parsing that information
back through that,
connection and then providing it back to the
PLCs. So we go all the way from
a camera to a PLC controlling a light
stack,
up to Google AI vision through OPC router,
all on hotel Wi Fi.
It's very imp it's very, very fun presentation,

(27:29):
and I think our team did a really great job.
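For readers curious what that camera-to-cloud step looks like at the HTTP level, here's a rough Python sketch of the request-building half, assuming the Google Cloud Vision `images:annotate` REST shape (a base64-encoded image plus a feature list). The hard-hat decision logic, and how OPC Router wires the result back to the PLC, are outside this sketch.

```python
import base64
import json

def build_vision_request(image_bytes: bytes,
                         feature: str = "OBJECT_LOCALIZATION") -> str:
    """Build the JSON body for a POST to a Vision-style annotate endpoint."""
    body = {
        "requests": [{
            # Vision's REST API expects the image content base64-encoded
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": feature, "maxResults": 10}],
        }]
    }
    return json.dumps(body)

# A real deployment would POST this body to
# https://vision.googleapis.com/v1/images:annotate with credentials,
# then parse the detected objects out of the JSON response.
payload = build_vision_request(b"\x89PNG...placeholder image bytes...")
print(json.loads(payload)["requests"][0]["features"][0]["type"])
```

The same pattern, an outbound REST call with a JSON body, is what the presentation routed through OPC Router's REST connector.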
So a pretty new offering I wanna highlight is our Data Caster. This is an actual piece of hardware. You know, at Software Toolbox we do have some hardware as well. It's just part of the nature of this environment, of how we mesh in between things. But the

(27:50):
idea is that there's a lot of different use cases for HMI and SCADA. They have grown so much from what they used to be, and they're a very core part of the automation stack. Now a lot of times, these are doing so many things beyond that as well. What we found is that in different areas of operations, you may not need all that different control.

(28:10):
You may not even have the space to make up a whole workstation for that as well. What the Data Caster does is simply plug into any network and into an HDMI-compatible display, and it gives you a very easily configured workspace to put a few key metrics onto a screen. So, for the different things you have, you

(28:30):
can connect directly to PLCs like Allen-Bradley. You can connect to SQL databases. You can also connect to REST APIs to gather the data from these different sources, and build a kind of easy-to-view KPI dashboard, in a way. So if you're on an operation line and you wanna look at your current run rate, maybe you have certain things in the PLC tags,

(28:50):
you know, flow and pressure, that's very important for those operators to see. They may not even have the capacity to be interacting with anything. They just need visualizations of what's going on. This product can just be installed in, you know, industrial areas, with any type of display that you can easily access, and

(29:11):
give them something that they can easily look at. It's configured all through a web browser to display what you want. You can put on different colors based on levels of values as well. And it's just, I feel, a very simple thing. Sometimes it seems so simple, but those might be the things that provide value on the actual operation floor.
This is, for anyone that's watching, kind of a quick view of a very simple screen.

(29:33):
What we're showing here is what it would look like from all the different data sources. So: talking directly to a ControlLogix PLC, talking to SQL databases, Micro 800s, a REST client, and, what's coming very soon, definitely by the end of this year, OPC UA support. So any OPC UA server that's out there that already has your PLC data, etcetera,

(29:56):
this could also connect to that and get
values from there.
Can I, can you make it, here I go: can you make it so it, like, changes pages every few seconds? Right now, it is a single page, but this is, like I said, a very new product, so we're taking any feedback. If, yeah, if there's this type of slideshow cycle that would be,

(30:16):
you know, valuable to anyone out there, let
us know. We're definitely always interested to see
the people that are actually working out at
these operation sites, what what's valuable to them.
Yeah. A lot of kiosks you see when you're traveling, it'll say, like, line one. Well, I'll just throw it out there: line one, and that'll be on there for five seconds, and then it'll go line two. That'll be on there for five seconds, and then line

(30:36):
three. You know, I just mentioned that because I can see that being a question that I would get from somebody who is asking me about it. Oh, great question. Appreciate it.
Alright. So now we're gonna set time for a little hands-on demo. For anyone that's just listening, I'm gonna talk about this at a high level and walk through everything. But

(30:56):
the idea is that we have a few different PLCs: a very common Allen-Bradley, and a Siemens S7-1500 that's in our office, pretty close to me, on the other side of the wall, actually. We're gonna first start by connecting that to our TOP Server, like we talked about. This is our industrial communication server, that offers OPC DA, OPC UA, and

(31:19):
SuiteLink connectivity as well. And then we're gonna bring this into our Cogent DataHub. This, we talked about, is getting those values up to these higher levels. What we'll be doing is also tunneling the data; we talked about being able to share data through the DataHubs themselves. I'll kinda explain why we're doing that here and the value you can add. And then we're

(31:39):
also gonna showcase adding on MQTT at this level: taking data, now, just from these two PLCs that are sitting on a rack, and I can automatically make all that information available in the MQTT broker. So any MQTT client that's out there that wants to subscribe to that data now has that accessible. And I've created this all through a really simple workflow.

(32:01):
We also have some databases connected. InfluxDB, which we install with Cogent DataHub, has a free visualization tool that kinda just helps you see what's going on in your processes. I wanna showcase a little bit of that as well. Alright. So now jumping into our demo, where we first start off here is our TOP Server. Like I mentioned before, if anyone has worked with KEPServer in the past, this is gonna look very similar.

(32:23):
It will, because it is the same technology and all the same things here.
The first thing that I wanted to establish in our demo was our connection to our PLCs. I have a few here. We're only gonna use the Allen-Bradley and the Siemens for the time that we have on our demo here. But how this builds out as a platform is you create these different channels and the

(32:44):
device connections between them. This is gonna be your physical connection to them; it's either a TCP/IP connection or maybe your serial connection as well. We have support for all of them. It really is a long list. Anyone watching out there, you can kind of see all the different drivers that we offer.
So, bringing this into a single platform, you can have all your connectivity based here. All those different connections that you now have up the stack, your SCADA, your historians, MES even as well, they can all go to a single source. Makes the management, troubleshooting, all of those a bit easier as well.
So one of the first things I did here, I have this built out, but I'll

(33:27):
kinda walk through what you would typically do. You have your Allen-Bradley ControlLogix Ethernet driver here first. You know, I have some IPs in here I won't show, but, regardless, we have our drivers here, and then we have a set of tags. These are all the global tags in the programming of the PLC. How I got these

(33:47):
to kind of map automatically is, in our driver, we're able to create tags automatically. So you're able to send a command to that device and ask for its entire tag database. It can come back, provide all that, map it out for you, and create those tags as well. This saves a lot of time versus, you know, an engineer having to go in and address all the individual items themselves. So once it's

(34:10):
defined in the programming project, you're able to bring this all in automatically.
I'll show now how easy that makes connecting to something like the Cogent DataHub. In a very similar fashion, we have a connection over here to the Siemens PLC that I also have. You can see beneath it all these different tag structures, and this was created the exact same way. Where those PLCs support it, you can

(34:32):
do an automatic tag generation, bring in all the structure that you've already built out in your PLC programming, and make this available on this OPC server now as well. So that's really the basis: we first need to establish communications to these PLCs, get that tag data, and now, what do we wanna do with it? So in this demo, what I wanted to bring up was

(34:52):
the Cogent DataHub next. So here I see a very similar kind of layout. We have a different set of plugins on the left side. So for anyone listening, the Cogent DataHub again is kind of our aggregation and conversion tool, for all these different types of protocols like OPC UA,

(35:13):
OPC DA, and OPC A&E for alarms and events. We also support OPC Alarms and Conditions, which is the newer profile for alarms in OPC UA. We have a variety of different ways that you can get data out of things, and data into the DataHub. We can also do bridging.

(35:34):
This concept is how you share data in between different points. So let's say I had a connection to one OPC server, and it was communicating to a certain PLC, and there were certain registers I was getting data from. Well, now I also wanna connect to a different OPC server that has an entirely different brand of PLCs. And then maybe I wanna share data in

(35:54):
between them directly. Well, with this software, I can just bridge those points between them. Once they're in the DataHub, I can do kind of whatever I want with them. I can then allow them to write between those PLCs and share data that way, and you're not having to do any type of hardwiring directly in between them; they're now able to communicate with each other. Through the standards of OPC and this variety of different

(36:15):
communication levels, I can integrate them together.
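The bridging idea reduces to a simple mapping: a point that arrives under one server's tag name is mirrored onto another server's tag name. A minimal Python sketch of that pattern; the tag names and in-memory "servers" here are invented for illustration and are not DataHub's internals.

```python
# Map of source points to destination points. In a real bridge these
# would be tags on two different OPC servers; here they are just names.
BRIDGE_MAP = {
    "ServerA.Line1.Speed": "ServerB.PLC2.SpeedSetpoint",
    "ServerA.Line1.Running": "ServerB.PLC2.RunCommand",
}

def bridge(update: dict, destination: dict) -> dict:
    """Copy each updated source point onto its mapped destination point."""
    for source_tag, value in update.items():
        target = BRIDGE_MAP.get(source_tag)
        if target is not None:
            destination[target] = value  # would be a write to the other PLC
    return destination

# An Allen-Bradley value lands, and the Siemens-side tag receives it,
# with no hardwiring between the two controllers.
plc_b = bridge({"ServerA.Line1.Speed": 120.0}, {})
print(plc_b)
```

The middleware's job is exactly this mapping plus the protocol conversions on each side, which is why the two PLC brands never need to speak to each other directly.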
You know, you bring up a good point. When you do something like that, is there any heartbeat? Like, under the general area, or under one of these topics, are there tags we can use that are from DataHub itself that can be sent to the destination, like

(36:36):
a heartbeat, or, you know, to verify the transactions?
Or
Yeah. Absolutely.
So, with this as well, there's a pretty strong scripting engine, and I have done that in the past, where you can make internal tags. And that could be a timer; it could be a counter. It just kind of allows you to create your own tags as well that you

(36:57):
could do the same thing with: share that, through a bridge connection, to a PLC. So, yeah, there are definitely some people that had those certain, you know, use cases where they wanna get something to just track on the software side and get it out to those hardware PLCs.
Absolutely. I mean, when you send data out of the PLC, the PLC doesn't care if you take its data. But when you're getting data into the PLC,

(37:18):
you wanna make sure it's updating and it's fresh. And so, you know, they throw a counter in there with the scripting, and are able to have that. As long as you see that incrementing, you know you've got good data coming in. That's a good feature.
Absolutely.
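The heartbeat pattern being discussed is worth spelling out: the software side increments a counter tag on a timer, and the consuming side treats a stalled counter as stale data. A small Python sketch under illustrative names (DataHub's own scripting engine would express this differently):

```python
class Heartbeat:
    """Software-side counter that a bridge would write to a PLC tag."""

    def __init__(self):
        self.counter = 0

    def tick(self) -> int:
        """Called on a timer; the result is the value sent downstream."""
        self.counter = (self.counter + 1) % 32768  # wrap like a 16-bit INT
        return self.counter

def is_fresh(previous: int, current: int) -> bool:
    """Consumer-side check: data is fresh only if the counter moved."""
    return current != previous

hb = Heartbeat()
a, b = hb.tick(), hb.tick()
print(is_fresh(a, b))   # counter moved between samples, so data is fresh
print(is_fresh(b, b))   # a stalled counter means the link went stale
```

In a PLC this check usually runs against a watchdog timer: if the counter hasn't changed within some timeout, the program falls back to a safe state.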
You know, another big one is the redundancy. So what this does is, beyond just the OPC, we can make redundancy for basically anything

(37:39):
that has two instances of it running. So any of these different connections. How it's unique is that it just looks at the buckets of data that you create. So, for an example, if I do have two different OPC servers and I put them into two areas, let's say OPC server one and OPC server two, I can now create an OPC redundancy

(38:02):
data bucket. And now any client that connects externally to that and wants that data is gonna go talk to that bucket of data. And that bucket of data is going to automatically change in between sources as things go down and things come back up, and the client would never know that that happened, unless you wanted it to. There are internal tags to show what's the current source and so on, but the idea is to make this transition

(38:24):
kind of hidden, so that, regardless of what's going on in the operations, if I have this set up, I can have my external applications just reading from a single source, without knowing that there are two things behind it that are actually controlling that.
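The "bucket" concept can be sketched in a few lines: clients read one logical point, and the bucket silently serves whichever underlying source is currently healthy. The structure and names below are illustrative, not DataHub's implementation.

```python
class RedundantBucket:
    """Serve reads from the first healthy source, hiding failover."""

    def __init__(self, primary: dict, backup: dict):
        # each source: {"ok": bool, "data": {tag: value}}
        self.sources = [primary, backup]

    def current_source(self) -> int:
        for i, src in enumerate(self.sources):
            if src["ok"]:
                return i
        raise RuntimeError("no healthy source")

    def read(self, tag: str):
        # The client always calls read(); it never picks a source itself.
        return self.sources[self.current_source()]["data"][tag]

bucket = RedundantBucket(
    {"ok": True, "data": {"Pressure": 10.1}},
    {"ok": True, "data": {"Pressure": 10.2}},
)
print(bucket.read("Pressure"))   # served by the primary
bucket.sources[0]["ok"] = False  # primary drops out
print(bucket.read("Pressure"))   # same API, now served by the backup
```

The client-facing API never changes across a failover, which is the whole point: a historian connected to the bucket keeps logging without knowing a switch happened.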
Very important for, you know, historian connections, where you wanna have a full, complete picture of that data that's coming in. If you're able

(38:45):
to make a redundant connection to two different servers and then allow that historian to talk to a single point, it doesn't have to control that switching back and forth. It will just see that data flow seamlessly from whichever one is up at that time.
Kinda beyond that as well, there are quite a few other different things in here. I don't think we have time to cover all of them. But for our demo, what I

(39:07):
wanna focus on first is our OPC UA connection. This allows us both to act as an OPC UA client, to get data from any servers out there, like our TOP Server, and also to act as an OPC UA server itself. So if anything's coming in, from maybe multiple connections to different servers, multiple connections to other things that aren't OPC

(39:29):
as well, I can now provide all this data automatically in my own namespace, to allow things to connect to me as well. And that's part of that aggregation feature and topic I was mentioning before. So with that, I have a connection here. It's pulling data all from my TOP Server. I have a few different tags from my Allen-Bradley and my Siemens PLCs selected.
The next part of this, that I was

(39:50):
mentioning, was the tunneling.
Like I said, this is very popular to get around DCOM issues, but there's a lot of reasons why you still may use this, beyond just the headache of DCOM and what it was. What this runs on is a TCP stream that takes all the data points, as a value, a quality, and a timestamp, and it can mirror those in between another

(40:12):
DataHub instance. So if I wanna get things across a network, like out of my OT side, whereas previously I would have to come in and allow an open port onto my network for any OPC UA clients across the network to access that, I can now actually change the direction of this and

(40:32):
allow me to tunnel data out of my network without opening up any ports. This is really big for security. If anyone out there is a security professional, or working as an engineer where you have to work with your IT and security a lot: you don't wanna have an open port, especially to your operations and OT side. So this allows you to change that direction of flow and push data out, in this

(40:53):
direction, into another area, like a DMZ computer, or up to a business-level computer as well.
The other thing as well that I have configured in this demo: the benefit of having that tunneling streaming data across this connection is I can also store this data locally in an InfluxDB database. The purpose of that, then, is that I

(41:14):
can actually historize this, and provide, if this connection ever goes down, a backfill of any information that was lost while that tunnel connection was down.
So, with this added layer on: in real-time data scenarios like OPC UA, unless you have historical access, you would lose a lot of data if that connection ever went down. But with this, I can actually

(41:37):
use the back end of this InfluxDB to buffer any values. When my connection comes back up, I pass them along that stream again. And if I have anything that's historically connected, like another InfluxDB, maybe a PI historian, Vue historian, any historian offering out there that can allow that connection, I can then provide all those

(41:59):
records that were originally missed and backfill that into those systems.
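The store-and-forward mechanic described here follows a common shape: while the link is down, readings queue locally; on reconnect, the backlog is flushed in order before live data resumes. A minimal Python sketch; the buffer here is an in-memory queue, whereas the demo uses InfluxDB for durable buffering.

```python
from collections import deque

class StoreAndForward:
    """Queue records during an outage, flush oldest-first on reconnect."""

    def __init__(self, send):
        self.send = send          # callable that delivers one record upstream
        self.connected = True
        self.backlog = deque()

    def publish(self, record):
        if self.connected:
            self.send(record)
        else:
            self.backlog.append(record)   # tunnel down: hold it locally

    def reconnect(self):
        self.connected = True
        while self.backlog:               # backfill the gap, in order
            self.send(self.backlog.popleft())

received = []
saf = StoreAndForward(received.append)
saf.publish({"Pressure": 10.0})
saf.connected = False
saf.publish({"Pressure": 10.5})           # buffered, not lost
saf.reconnect()                           # gap is backfilled
print(received)
```

Because records are timestamped at the source, a historian on the receiving end can insert the backfilled values at their original times, which is what produces the gap-free trend described next.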
So I switched over to a second machine. It's gonna look very similar here as well. This also has an instance of the Cogent DataHub running here. For anyone not watching, what we actually have on this side is the portion of the tunneler that's sitting here and listening for any data coming

(42:20):
in. So on my first machine, I was able to connect my PLCs and gather that information into Cogent DataHub, and now I'm pushing that information across the network into a separate machine that's sitting here and listening to gather information. So what I can quickly do is just make sure I have all my data here. So I have these different points from my Allen-Bradley PLC. I have

(42:42):
a few different simulation demo points, like temperature, pressure, tank level, a few statuses, and all this is updating directly through that stream as the PLC is updating it as well.
I also have my Siemens controller. I have some current values and a few different counter tags as well.

(43:03):
All of this, again, is being directly streamed through that tunnel. I'm not connecting to an OPC server at all on this side. I can show you that here: there are no connections configured. I'm not talking to the PLCs directly on this machine as well. But we're able to pass all the information through without opening up any ports on my OT demo machine, per se.

(43:23):
So what's the benefit of that? Well, again, security. Also, the ability to do the store-and-forward mechanisms. On the other side, I was logging directly to an InfluxDB. This could be my buffer, and then I was able to configure it so that, if any values were lost, it stores that and sends it across the network.

(43:45):
So now, on this side, if I pull up Chronograf, which is a free visualization tool that installs with the DataHub as well, I can see some very nice visual diagrams of what is going on with this data. So I have a pressure that is just a simulator in this Allen-Bradley

(44:08):
PLC that ramps up and comes back down. It's not actually connected to anything that's reading a real pressure, but you can see, over time, I can kind of change through these different layers of time. And I might go back a little far, but I have a lot of data that's been stored in here. For a while during my test, I turned this off and

(44:28):
made it fail, but then I came back in, and I was able to recreate all the data and backfill it as well. So through these views, I can see that, as data disconnects and as it comes back on, I have a very cyclical view of the data, because it was able to recover and store-and-forward from that source. Like I said, Shawn, data quality

(44:50):
is a big thing in this industry. It's a big thing for people both at the operations side and for people making decisions in the business layer. So being able to have a full picture, without gaps, is definitely something that you should be prioritizing, when you can.
Now, what we're seeing here is you're using InfluxDB on this destination PC,

(45:11):
or IT-side PC, and Chronograf, which was that utility, that package that gets installed; it's free. But you don't actually have to use that. You could have sent this in to an OSIsoft PI or Exactly. Somebody else's historian. Right? Can you name some of the historians you work with? I know OSIsoft PI. Yeah. Yeah. Absolutely.

(45:32):
So there are quite a few different ones. As far as what we support in the DataHub natively: Amazon Kinesis, a cloud-hosted service that we can also do the same things with from here; AVEVA Historian; AVEVA Insight; Apache Kafka. That's kind of a newer one as well, that used to be a very IT-oriented solution, now getting into OT.

(45:53):
It's kind of a similar database structure, where things are stored in different topics that we can stream to. On top of that, just regular old ODBC connections. That opens up a lot of different ways you can do it. Or even the old classic OPC HDA. So if you have any historians that can act as an OPC

(46:15):
HDA connection, we can also stream it through there. Excellent. That's a great list.
The other thing I wanna show, while we still have some time here, is that MQTT component. This is really growing, and it's gonna continue to be a part of the industrial automation technology stack and conversations moving forward, for streaming data, you know, from devices, edge

(46:38):
devices, up into different layers, both now into the OT, and then maybe out to IT and our business levels as well, and definitely into the cloud, as we're seeing a lot of growth into it.
Like I mentioned with DataHub, the big benefit is I have all these different connections; I can consume all this data. Well, I can also act as an MQTT broker. And what

(46:59):
a broker typically does in MQTT is just route data and share data. It's kind of that central point where things come to it to either say, hey, I'm giving you some new values, share it with someone else. Or, hey, I need these values, can you give me that? It really fits in super well with what this product is at its core. So all I have to do here is just enable it.

(47:20):
What that now allows: I have an example here, MQTT Explorer. If anyone has worked with MQTT, you're probably familiar with this. There's nothing else I configured beyond just enabling the broker. And you can see, within this structure, I have all the same data that was in my DataHub already, the same things I was collecting from my PLCs and TOP Server.

(47:40):
Now I have these as MQTT topics, and I have them in JSON format with the value and their timestamp. You can even see, like, a little trend here, kind of matching what we saw in Influx. And now this enables all those different cloud connectors that wanna speak this language to do it seamlessly.
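On the subscriber side, consuming one of those points is just decoding a small JSON document in the client's message callback. A Python sketch of that step; the payload shape below (value plus timestamp) mirrors what's described on screen, but the exact DataHub field names may differ.

```python
import json

# A hypothetical per-point payload, as a broker might publish it.
payload = b'{"value": 73.4, "quality": 192, "timestamp": "2025-10-08T14:05:00Z"}'

def on_message(topic: str, body: bytes) -> tuple:
    """What an MQTT client's message callback would do with one point."""
    doc = json.loads(body)
    return topic, doc["value"], doc["timestamp"]

# An MQTT library would invoke this for each message on a subscribed topic.
point = on_message("demo/AllenBradley/Pressure", payload)
print(point)
```

With a real client library, this function would be registered as the message handler after subscribing to the topic tree, and the decoded tuple handed to whatever consumes the data.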
So you didn't have to set up the PLCs a second time to do this? Nope.

(48:01):
Not at all. You just enabled this, and now the data's going this way as well. Exactly. Yeah. That's a really strong point of the Cogent DataHub: once you have everything in its structure and model, you just enable it to use any of these different connections.
You can get really, really creative with these different things. Like we talked about with the

(48:21):
bridging aspect and getting into different systems, even writing down to the PLCs. You can make custom notifications and email alerts based on any of these values. You could even take something like this MQTT connection, tunnel it across to another DataHub as well, and maybe then convert it to OPC DA. And now you've made a

(48:42):
new connection over to something that's very legacy as well. Yeah. I mean, the options here are just pretty amazing, all the different things that can be done.
Absolutely.
Well, you know, I wanna jump back into some of our presentation here while we still got the time. Now that we're kinda done with our demo: there are so many different ways that you can use these different

(49:04):
tools. This is just a really simple kind of view of something that used to be very simple, just connecting OPC servers to a variety of different connections, kind of expanding onto that with the store and forward, the local Influx usage, getting out to things like MQTT as well. But there's a lot more you can do with these solutions. So, like Shawn said, reach out to us. We're

(49:25):
happy to engage and see what we can help you with.
I have a few other things before we wrap up. Just overall, we've worked across nearly every industry. We have installations across the globe, on all continents. And like I said, we've been around for pushing thirty years next year. So we've seen a lot of different things, and we really wanna talk to anyone out

(49:47):
there that maybe has some struggles going on with just connectivity, or has any ongoing projects. If you work in these different industries, or if there's nothing marked here and you have anything going on that you need help with, we're very happy to sit down and let you know if there's something we can do there.
Yeah. For those who are listening: I mean, we see most of the big energy

(50:08):
and consumer-product companies on that slide. So I'm not gonna read them off, but it's a lot of car manufacturers, you know; these are the household name brands that everybody knows and loves. So, to kind of wrap some things up here: we talked about all the different ways that we've kind of helped solve things in the past, but I wanna highlight

(50:28):
some of the unique ones that we've also gone and done some case studies on, and success stories.
So this one I actually got to work on within the last few years: a plastic packaging manufacturer was looking to track uptime and downtime across multiple different lines, and they had a new cloud solution that they were already evaluating.

(50:50):
They were really excited to get it into play. They had a lot of upside to getting things connected to this and starting to use it. Well, what they had was a lot of different PLCs, a lot of different brands, different areas, different, you know, areas of operation that they needed to connect to. So what they did was first get that into our TOP Server, kind of similar

(51:11):
to what we showed in our demo. You just need to get all the data into a centralized platform first, get that data accessible. Then from there, once they had all that information in a centralized area, they used the Cogent DataHub as well, to help aggregate that information and transform it to be sent to the cloud through MQTT.

(51:31):
So, very similar to the demo here, this is actually a real use case of that: getting information from PLCs, structuring it into how that cloud system needed it for MQTT, and streamlining that data connection to now where it's just running in operation. They constantly have updates about where their lines are in operation,

(51:52):
tracking their downtime, tracking their uptime as well, and then being able to do some predictive analytics in that cloud solution based on their history. So this really enabled them to build from what they had existing. They were doing a lot of manual tracking; now it's an entirely automated system, with management able to see real views of what's going

(52:12):
on at the operation level.
Another one I wanna talk about: we were able to do this success story with Ace Automation. They worked with a pharmaceutical company. Ace Automation is an SI, and they were brought in and doing a lot of work with some old DDE connections, doing some custom

(52:33):
Excel macros, and were just having a hard time maintaining some legacy systems that were a pain to deal with. They were working with these older history files from some old InTouch HMIs, and what they needed was something that was not just based on Excel and custom macros. So one product we

(52:53):
didn't get to talk about yet, but we also carry, is our LGH File Inspector. It's able to take these files, put them out into a standardized format like CSV, and also do a lot of the automation of when these files should be queried. Should they be queried for different lengths? Should they be output to different areas? Can I set these up in a scheduled task so

(53:14):
it can be done automatically, rather than someone having to sit down and do it manually in Excel? So they were able to recover over fifty hours of engineering time with the solution, from having to do late-night calls to troubleshoot an Excel macro that stopped working, and from crashing machines, because they were running legacy systems to still support some of the

(53:35):
DDE servers, into saving them, you know, almost two hundred plus hours of productivity.
Another example: we were able to work with a renewable energy customer that's doing a lot of innovative things across North America. They had a very ambitious plan to double their footprint in the next two years.

(53:56):
And with that, they had to really look back at their assets and see where they currently stand, and how to make new standards to support growing into what they want to be. So, with this: they had a lot of different data sources currently, all kind of siloed at their specific areas. Nothing was really connected commonly to a

(54:17):
corporate-level area of historization, or control and security. So again, they were able to use our TOP Server to put out a standard connectivity platform, and bring in the DataHub as an aggregation tool. So each of these sites would have a TOP Server that was individually collecting data from different devices, and then that was able to send it

(54:38):
into a single DataHub. So now their corporate level had an entire view of all the information from these different plants in one single application. That then enabled them to connect their historian applications to that DataHub and have a perfect view, and make visualizations off of their entire operations. What this allowed them to do was grow without replacing everything.

(54:59):
And that's a big thing that we strive on: not replacing and ripping out all your existing technologies. That's not something you can do overnight. But how do we provide value and gain efficiency with what's in place, providing newer technologies on top of that, without disrupting the actual operation as well? So this was really, really successful.

(55:20):
And at the end, I just wanna kind of provide some other contacts and information so people can learn more. We have a blog that goes out every week on Thursdays. A lot of good technical content out there, a lot of recaps of the awesome things we get to do here, the success stories as well, and you can always find that at blog.softwaretoolbox.com.

(55:43):
And again, our main website is softwaretoolbox.com. You can get product information, downloads, reach out to anyone on our team. Let's discuss what issues you have going on, any new projects; we'll be happy to listen.
Well, Connor, I wanna thank you very much for coming on the show and bringing us up to speed on not only Software Toolbox,

(56:05):
but also, you know, bringing us up to speed on TOP Server and doing that demo with TOP Server and DataHub. Really appreciate that. And I think, you know, like you just said, if anybody has any projects that you think these solutions may be able to solve, please give them a call. And if you've already done something with them, leave a comment. You know? To

(56:27):
leave a comment, no matter where you're watching or listening to this, let us know what you did. What did you use? Like me, I used OmniServer all those many years ago, and, of course, TOP Server as an OPC server. And, of course, Symbol Factory; I use that all the time. But if you guys are using it, let us know in the comments. It's always great to hear from people out there.

(56:49):
I know, you know, with thousands of you
guys listening every week, but I'd love to
hear, you know, are you using these products?
Or if you have questions, I'll funnel them
over to Connor if you put them in
the comments. So with that, Connor, did you
have anything else you wanted to cover before
we close out today's show? I think that
was it, Shawn. Thanks again for having us
on. It was really fun. I hope you
enjoyed that episode, and I wanna thank Connor

(57:09):
for taking time out of his busy schedule
to come on the show and bring us
up to speed on Software Toolbox and their
suite of products. Really appreciated that demo at
the end too. So if you're watching, you
actually got a look at
their products and how they work. And, just
really appreciate them taking all of my questions.
I also appreciate the fact that Software Toolbox
sponsored this episode, meaning we were able to

(57:31):
release it to you without any ads. So
I really appreciate them. If you're doing any
business with Software Toolbox,
please thank them for sponsoring this episode. And
with that, I just wanna wish you all
good health and happiness. And until next time,
my friends,
peace.