That's a pretty broad brush you're tarring all FOSS creators with. I don't
accept it. I've worked with a number of Home Automation software publishers
who have literally fallen all over themselves to fix a bug or issue that I
identified and do it very quickly. If they couldn't fix it right away
because of some complex dependencies elsewhere in the code, they'd certainly
file it away in their "we'll get to it at the next major build."
Interestingly enough, when I asked for features that would benefit mostly me
and not all the other users, I always got the "we'll see" which is
parent-speak for a deferred "no." "Can I have a BB gun for Christmas?"
If you told me that, as the guy who hired you, I would simply say "It's
unfortunate that you used tools that were unsuitable for the task. Didn't
you evaluate them fully before starting?" Whose fault is it then? The poor
FOSS creator who's doing it as a labor of love or the guy trying to save a
buck by using FOSS on a paying project?
I rarely bring freeware onto a paying project but if I do, I make sure it's
functioning and that it can be properly licensed by the end user.
I've never had an issue with IrfanView, VLC, HexEdit, WinZip, PKModem (in
the very old days) and so many more products that I have just the opposite
view of FOSS than you have. What FOSS program burned you so badly that
you're willing to classify them all as drek?
That's not only for FOSS. When one of our clients asks for a new feature
I evaluate it. If it's a reasonable request and something I think most
of the users will like, I schedule it. If it's not a major project but
only of interest to that client I'll schedule it to be done some rainy
day. If it's a major change and only one client asks for it, it's time
for a PO.
Some clients are like kids. They ask for a gerbil, so you give them a
gerbil. Then they want a dog, so Lassie joins the family. They figure
they're on a roll and go for a pony. That's when you say 'we'll see.'
I don't get suckered into this sort of "endless maintenance".
Decide what you want, now. If you don't know, then *think* about it,
talk to your salespeople (surely THEY are talking to your customers?),
look at your competitors, etc. *You* are the best judge of your
market AND your company (including its commitment *to* that market).
[If you just want to give lip service to claiming that you're committed
to delivering The Best Quality, Leading Edge Products, Best Service,
etc. then *you* should know the hollowness of those promises]
I can tell you what's possible/affordable. I can refine your UI/UX
to make it more consistent or efficient. I can design for future
enhancement. But, I have NO DESIRE to perform those future enhancements.
Nor, argue with you as to what forms they might take, when they might
want to be introduced, how they will compete with previous offerings, etc.
That's "work" (boring). Just like disciplining a child.
I think most clients/customers *know* what they want. But,
they haven't "sat down" to actually "put it into words".
I.e., if *you* can get them to focus on this as a task,
you can usually lead them through all of the choices to
a definitive statement of need. And, they will actually
be happy/confident with that end result.
They just don't know how to do it themselves!
Aye, there's the rub. I kind of disagree. Clients don't know the
capabilities of computers so they really can't spec out what they want with
a true understanding of what can be done.
I designed a library support system for a FFRDC and the head librarian said
of the database we were building: "Can't we put all the data in one big
searchable field like my Post It note program does?" The database ended up
with well over 100 discrete elements, not one single field. But it took
some serious explaining about searching, producing meaningful reports and
accounting for interlibrary lending and how important distinct data elements
were to those capabilities. I always tried to find clients that had
reasonable expectations and some knowledge of computer systems to start with.
I think clients get a real understanding of what they want when they see and
play with the first iteration/prototype. I strongly believe in the line
from "The Mythical Man-Month" that programmers should "build one to throw
away because you will anyway." And I always did and some programmers I know
went through more than just one total redesign.
Insert age-old cartoon of the tire swing labeled "what the client wanted and
what engineering delivered" here:
<<In engineering-driven companies, what gets built is what the engineers
think the customers want, shaped by the engineers' perceptions and
preconceptions. Engineers often are not really in touch with their
customers. In fact, sales people often actively discourage putting the
engineers in direct contact with their customers because they're concerned
that engineers will make blunt or impolitic comments to their customers,
which may cast sales in an unfavorable light. So in most cases engineers
don't really have a good perspective on what the customers really want.
Further, engineers are generally enamored of the technologies they are using
and designing, and want to be able to showcase their technology to the world
to say, "Behold world, see what I can do!" >>
That's a pretty true analysis in my experience. There's a big difference
between systems analysis and programming and not many people can do both.
The first requires almost a degree in industrial psychology, the latter, a
good technical background.
It's way more complicated than that. Once there's more than one person
helping to specify the program requirements the real fun begins.
They don't *have* to know what the capabilities of computers or any
other technology happen to be. They can *buy* that knowledge (in the
form of a consultant).
There's a difference between an implementation and a mental model of how
a customer wants something to work. My "databases" are conceptually
flat; yet, normalization causes them to be implemented in anything *but*
a "large, flat database".
A client doesn't need to understand these implementation economies. My
job is to make it *look* like he wants while not suffering the
penalties of implementing it (literally) that way.
I hide that detail. I let "users" do searches without bothering them
with details of primary/foreign keys, joins, etc. Ask the question
you want of the database and let me (it) worry about how to make
the results visible.
(Yeah, you can ask some pretty stupid/expensive questions of the
data; but, if that's what you *really* want to know... ask!)
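A minimal sketch of that idea, using Python's built-in sqlite3 (the table
and column names here are invented for illustration): the storage is
normalized, but a view presents the conceptually "flat database" so the
user just asks a question, with keys and joins hidden underneath.

```python
import sqlite3

# Normalized storage underneath; a flat "catalog" view on top.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books   (id INTEGER PRIMARY KEY, title TEXT,
                          author_id INTEGER REFERENCES authors(id));
    -- The "large, flat database" the user *thinks* they are searching:
    CREATE VIEW catalog AS
        SELECT books.title AS title, authors.name AS author
        FROM books JOIN authors ON books.author_id = authors.id;
    INSERT INTO authors VALUES (1, 'Brooks');
    INSERT INTO books VALUES (10, 'The Mythical Man-Month', 1);
""")

# The user just asks; no primary/foreign keys or joins in sight.
rows = db.execute(
    "SELECT author FROM catalog WHERE title LIKE '%Man-Month%'").fetchall()
print(rows)  # [('Brooks',)]
```

The view is the "make it *look* like he wants" layer; the normalization
behind it is the implementation economy the client never needs to see.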
A good designer does that "in their head" without incurring the costs
of actually *building* it. It's not hard to come up with a prospective
implementation and then run it through some "virtual" what-if cycles to
see where it excels -- and where it falls down.
You don't need to BUILD a "throw-away" house to decide that you don't
want the front entry to open DIRECTLY into your living room! Just
*think* about what the real-world consequences are likely to be
(gee, do I want all these guests tracking mud and snow into my
living room? wouldn't some sort of "mud room" be a more efficient
way to handle visitors??)
A friend made some changes (woe unto thee!) to his floorplan before
his (custom) home was built. When I looked at the house, the first
thing I noticed was the fire exit for the basement came up *under* the
wooden "deck" (which he'd "extended" to line up with an added doorway).
You just have to look at (proposed) implementations with a critical eye
and not just a "checkoff" criteria.
Exactly. Having an engineer (worse than that, a "programmer"!) make design
decisions is an abdication of those responsibilities. The "make one (really!)
to throw away" approach just enshrines that in practice! Now, you've
made an *investment* in an approach. Even if you *can* discipline yourself
to throw it away, chances are, the implementors will still cling to some
or all of that (rejected!) "solution" -- it is something they KNOW and
already have behind them. Even discarding all the documents will still
leave that memory in their mind, (mis)guiding their next efforts.
You need to "pretend" design a solution -- commit to an "AS IF" presentation
on paper that you can then exercise (to evaluate its strengths/weaknesses)
without having anyone "invested" in any aspect of that solution.
A nonprofit, here, provides financial support for homeless students.
There's a fair bit of paperwork involved tracking each student's
eligibility for assistance, etc. (has he/she been attending classes?
making appropriate grades? what do each of his/her teachers have to
say on the subject? still homeless? etc.). And, a fair bit of
paperwork tracking the actual assistance made available to each student
(monthly stipend amount? mass transit vouchers? gift cards? emergency
assistance? medical care? etc.)
As there's a *lot* of "cash" involved, there is also plenty of opportunity
for abuse/misuse/theft/etc. Between "honest mistakes", clerical errors and
outright theft, there are lots of ways monies can "disappear" unexpectedly.
Their first approach to this was a giant spreadsheet (flat database).
So big (wide) that it was impractical to use. And, highly "manual".
They hired someone to come up with a "better" solution. Of course, that
person looked at it from a technology standpoint and essentially came
up with the same solution -- except now using a *real* database.
And, two part-time staff members who do nothing but data entry.
So, they've spent a lot of money (which could have gone to their
clientele) and have nothing better than they had originally!
Because the people making the decisions didn't *think* about their
needs and didn't have guidance wrt the types of solutions *possible*
(with technology). The organization had the "spreadsheet" model
stuck in their minds so couldn't see past that. The implementer
didn't bother THINKING about how the system was going to be USED so
he didn't challenge that model (and how impractical and error
prone it would be).
I've started documenting an alternative approach (in anticipation of
when they will abandon this current approach as "doing too little
and costing too much to maintain"):
Everything in their "system" is a tangible item: stipend checks,
gift cards, transit passes, teacher assessments, etc. Each item
*represents* some data -- but in a tangible form. I.e., theft,
abuse, misuse happens when some PHYSICAL item ends up in a place
that it *shouldn't* (e.g., a gift card ends up in the wrong "pocket").
The data entry operations (that require two part-timers) are just
a manual means of telling a machine (The System) where those items
actually reside: this gift card resides in this student's pocket
vs. in the advisor's desk drawer waiting to be gifted to a student!
So, why not let the physical items document their own whereabouts?
Just make a system that "watches" their transfer(s) from one
location (person who WAS responsible for them) to another (person
who now IS responsible for them).
Identity of an item is presented to The System; the identity of
its NEW RECIPIENT/custodian is also presented to The System.
In effect, the system is told: This Thing is now being given to
this Person. The system then simply has to track the current
"custodian" of each "thing" -- and, an audit trail of each transfer.
Isn't this what the giant spreadsheet -- er, "database" -- tries to
do? But, in a very *manual* fashion?
Read barcoded label PRINTED on the stipend check (when you printed the
check!). Now, system knows the "thing" is a particular check (in
a particular amount, payable to a particular individual). Next,
read a barcode (mag stripe, RFID, etc.) for an ID card to indicate
the recipient of that "thing": the intended student? the advisor who
will maintain custody of the item until the student shows up to
collect it in person? the mailman who will deliver the item(s) to
their desired recipients? etc.
Now, you instantly know "where" an item is located: "Hi, I'm
here to pick up my monthly check." "Can I see your ID? ...
Ah, OK, Mary has your check; I'll call her for you!"
The same applies to gift cards (which already have unique barcode
labels printed on them by their issuers), transit passes, etc.
No tedious (error prone) data entry involved.
And, the "mental model" is a trivial one: each Thing has a current
custodian. And, can have any number of custodians before it
ultimately reaches its desired destination.
Furthermore, The System need not impose any rules on how those
transfers take place. Your "process" can change without the system
being impacted -- or even CONSULTED! An item can move from
"accounting" to a particular advisor to a different advisor
back to accounting then to a receptionist and possibly even
The Mailman (who, hopefully, delivers it as addressed!). The
system just watches, passively.
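A toy sketch of that "current custodian plus audit trail" model (all item
and person identifiers below are hypothetical): each scan event simply
appends a transfer record, and an item's present whereabouts is just its
most recent entry. No rules about *how* transfers happen are imposed.

```python
from datetime import datetime, timezone

class CustodyLog:
    """Passively records item transfers; imposes no process rules at all."""

    def __init__(self):
        # Append-only audit trail: (timestamp, item_id, new_custodian)
        self.transfers = []

    def transfer(self, item_id, new_custodian):
        """Called when an item's barcode and a recipient's ID are scanned."""
        self.transfers.append((datetime.now(timezone.utc), item_id, new_custodian))

    def custodian(self, item_id):
        """Who holds the item right now? (the last recorded transfer)"""
        for when, item, who in reversed(self.transfers):
            if item == item_id:
                return who
        return None  # never seen by The System

    def history(self, item_id):
        """Full chain of custody for one item."""
        return [(when, who) for when, item, who in self.transfers
                if item == item_id]

log = CustodyLog()
log.transfer("check-0042", "accounting")       # check printed, enters system
log.transfer("check-0042", "advisor:Mary")     # handed to an advisor
log.transfer("giftcard-777", "advisor:Mary")
log.transfer("check-0042", "student:J.Doe")    # picked up in person

print(log.custodian("check-0042"))             # student:J.Doe
print(log.custodian("giftcard-777"))           # advisor:Mary
print(len(log.history("check-0042")))          # 3
```

"Where is Mary's check?" becomes a lookup rather than a data-entry chore,
and the audit trail is a by-product of the scans themselves.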
Data collection (from teachers, etc.) happens similarly. *You*
are in control of the forms that are filled out; why not "serialize"
those forms (barcode) so the system can later identify a form
just from the identifier (barcode) that *it* printed on the form
prior to it being distributed to the teacher, etc. Knowing
*which* "thing" (piece of paper) it represents lets you simplify
the data extraction from that form to one of scanning for X's
in specific boxes on the form -- and keeping a TIFF image of the
form "for your records" (and for possible manual review).
The cost to scale this "solution" to handle 500, 1000, or 5000
"clients" is constant. You don't suddenly need to hire more
data entry people to transcribe all of the data from these
forms as the number of forms increases, etc. And, your accuracy
is uncompromised. At any point, you can approach an individual and
confidently say: "The System claims you were the last recipient of
these $5,000 of gift cards. Where are they?"
I disagree. I think it just requires clear insight into the problem domain.
What I've described above isn't particularly high tech. There's no
fancy hardware involved. You could even script it with a COTS DBMS
(and some discipline on the part of users).
What it *does* require is thinking about the ACTUAL goal and not the
*imagined* goal. I.e., NOT thinking in terms of "building a database"
but, rather, "tracking physical assets". And, not getting sucked into
prematurely mapping that activity into an abstract "data entry" task.
(similar to what you would use for tracking Fantasy Football results.)
That's where a consultant is a big win! It's an identifiable cost
(why are we paying this guy all this money?) so tends to get noticed.
The consultant has no political loyalties to any particular party
in the organization so can "speak his mind". Folks disagreeing
with his recommendations risk taking ownership of (or having ownership
ASSIGNED to them) a project that might *fail* if "done THEIR way", etc.
(keep your head down and don't draw attention to yourself!)
And, if/when the project succeeds, no one has gained/lost in the
transaction; the sole "gainer" is the consultant -- who isn't part of
the organization (and may never be seen again!)
You've just flipped the problem, not solved it. The likelihood that you or
most any other programmer is a subject matter expert in their field is not
very high. If you don't know their business intimately - as well as they
do - how can *you* determine what's best for them?
Systems analysis - make that *successful* systems analysis - is a highly
collaborative effort where the clients and their expertise meet with an
information professional well-versed with IT and what it can do. The
clients would NEVER be talking to an end programmer at the analysis stage
unless it's a very small project.
By now a lot of companies big and small have learned what it means to depend
upon a lone star programmer who disappears. (See my post about WordStar.)
That often happens just because the HW moves on and the best system in the
world for Win2000 might not even load in another environment. People so
burned usually *don't* let it happen again.
IME, they don't need expertise in "their business"/technology. Rather,
they need help in applying that to some problem or in some market.
+ I don't (need to) understand the statistical analysis that may occur
in assessing the likelihood of a particular patient contracting a
particular disease;
+ I don't (need to) understand the chemistry involved in blood assays;
+ I don't (need to) understand the marketing strategy that suggest
GIFTING the assay equipment to the customer (knowing that it will
lock them into perpetually purchasing supplies/reagents FOR that
equipment at a significant margin);
+ I don't (need to) understand FDA requirements pertaining to those
reagents, the assays to which they apply, the folks who perform
those tests or the documentation requirements thereafter;
+ I don't (need to) understand why ABC is *modified* Codabar instead
of *genuine* Codabar;
+ I don't (need to) understand why said equipment must be capable of
operating for 2 hours in the absence of power (why not 3? or 1??);
I can accept all of those items as "Given". Yet, still introduce
considerable value in:
- determining how to unobtrusively "watch" the technician's actions in
performing the assay
- how to detect the placement of a few microliters (small "drop") and
identify *where* it's been placed (and if something had been placed
there already --> contaminated sample!)
- how to do so inexpensively and robustly (e.g., so the detector isn't
compromised by ambient EMI/RFI)
- how to allow the user to introduce those ABC barcode labels on different
types of media (tubes, trays, boxes) without requiring three different
- how to protect the integrity of the data that I collect
- how to design circuitry that can operate on a given "battery mass"
for the required 2 hours
- how to encode these algorithms in a programming language that can
be maintained by others
- how to implement all of this so that it doesn't require initial
or ongoing calibration
The client can spend their time on the things that are *their*
business IP -- developing blood assays and making those repeatable
and "affordable". They needn't invest in the technology required
to *apply* that technology -- they can *rent* that expertise!
Exactly. "End programmers" tend to work in cubicles and have very little
idea as to what The Bigger Picture entails. My speech synthesizer is
the sort of thing you could hand to an "end programmer" -- small (1 man
year effort) and well defined (given enough prior art and detailed
specification). What that "end programmer" needs to know can be spoon
fed to him/her in a specification. He/she has only mild interest in
the *application* ("OK, so it talks. What do *I* care WHY it's talking??")
OTOH, the system engineer needs to identify *why* this mechanism is
needed, what its role in the system will be, the limits defining its
performance, resource/development budget, the likely implementation
technology (as suggested by these other requirements), etc.
So, the "end programmer" can't just feel free to "take a PC" and "make
it talk" but, rather, is given an effective power budget: you must
be able to speak using these resources for this interval (which could
be expressed loosely as a "number of words").
Actually, you would be surprised how often this continues to occur!
I worked at a firm that based their control system on an Apple ][
computer. Long after Apple ][ computers could no longer be *purchased*!
(tell customer that you bought the guts of the control system for
his $1M production system AT A GARAGE SALE!!)
Much of the move to dumb-down the effort required to "program"
comes from this fear of being tied to key developers. So, instead
of one or two GOOD developers, you embrace a small army of average ones.
And, find yourself missing out on key skills that are ESSENTIAL
to successful product development: e.g., system engineers!
("We'll just let one of those AVERAGE programmers take on that role!")
About a decade ago in Chicago there was an incident where a programmer
or systems analyst (CICS) left in some sort of dispute and took his
source code with him. Code for the air traffic control system of
either O'Hare airport, or the entire Aurora regional center.
Should be news articles somewhere on the net.
I think the feds arrested him before it got straightened out.
Big companies can be ineptly managed. I was sole support for a fixed
income system at a major insurance company for about 3 years.
$50 billion in assets. They were moving the system from mainframe to
client server, but the move wouldn't happen for a year.
The investments department had undergone massive changes in the move
to client server, and ALL of the mainframe guys who knew anything
about the system were gone.
My "IT manager" came to my desk one day, and told me my contract
wasn't being renewed. Two weeks. I nodded my head and said "Okay."
We didn't talk much anyway, just exchanged pleasantries.
I was surprised Vince, my client user, hadn't told me first. We had a
good relationship, and I respected him, but I never took anything for
granted in the corporate world. Still, I was surprised, since I had
many high-level friends there, and hadn't caught wind of it.
The system was active and doing all the fixed income business,
including the CICS trading terminals.
A couple of weeks earlier I had converted from salaried to hourly with
the contracting company I worked for. They had boosted my rate with the
insurance company. I had become "too expensive."
Later that day I went to Vince's office to tell him what a joy it had been
to work for him - I never burned bridges.
All he could say was "What?" and "I'll take care of it" and "You ain't
going nowhere" before he stormed out the office.
Anyway, I maintained that system until its demise a year later.
Ha! I wouldn't doubt it! People/firms don't really know where
their "value" resides. I've had clients contact me to recover
the source code from ROM images -- because they mistakenly thought
that having the ROMs was all they needed to *produce* their
product (which is technically true -- as long as you never intend
to CHANGE that product! :> )
E.g., thinking potato chips and computer chips are of equal value
to an economy...
Something about "left hand" and "right hand" comes to mind :>
A firm I worked at early in my career did a lot of subcontract work for
an IBM division. One of my friends, there, being the "heart and soul"
of that business at our firm.
He ended up leaving because he got fed up with *his* boss (who would
denigrate his work, take credit for *his* efforts, etc.). That boss
(and above) went out of their way to hide my buddy's departure from
the client (IBM).
One day, the big wigs from the IBM division flew into town and
demanded a meeting with our top management. When the conference
room door closed, the first words from their mouth were:
"Does <John Doe> work here, or not?"
I.e., rather than being open with them about this change in KEY
(*essential*) personnel, they had hoped to hide that fact -- lest
they also have to explain why <John Doe> had left the firm (and
risk <John Doe> giving them "an earful" -- much to the dismay of management!)
The amount of business conducted with IBM thereafter steadily decreased.
Probably not from a decrease in quality but, rather, from this (stupid)
personal business fallout.
Almost the exact same thing just happened to my wife. She's contracted out
and the company decided she should retire (and they would put a worker in at
half her rate). The client went down to the contractor and met with one of
the VP's. Problem solved when he said he would not re-up the contract
without her. Weeks of worry evaporated instantly. She's happy to know that
she's well enough appreciated for her client to visit home base on her behalf.
It's not supposed to happen but we have a couple of clients that call or
email me directly. They're happy campers. They don't have a problem with
running beta software and get their stuff in days rather than the months
required for the formal process. I hear a little grumbling about my
cowboy ways but those are balanced by two clients that give us glowing reviews.
When's "the next major build"? Is it before or *after* my release?
E.g., it is not unusual for a database to be a read-only construct and
still be useful. In fact, it can be a *desirable* characteristic
of a database -- the CERTAINTY that the contents CAN NOT be altered!
I.e., put the "data" on read-only media. Try as the software might
(bugs, malware), there's simply NO WAY to alter the persistent copy of
I rely on this capability in my current project. I.e., the data
resides *in* read-only memory. *If* the DBMS expects to be able
to alter it FOR WHATEVER REASON, that action WILL fail! There
is simply no way to write to the memory even if you deliberately
tried to do so!
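The same guarantee can be approximated even with an ordinary COTS DBMS.
As a sketch (file name invented for illustration): SQLite lets you open a
database strictly read-only via its URI syntax, so any attempted write
fails outright, no matter what the software tries to do.

```python
import os
import sqlite3
import tempfile

# Build a small database once, then reopen it strictly read-only.
path = os.path.join(tempfile.mkdtemp(), "config.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE settings (key TEXT, value TEXT)")
rw.execute("INSERT INTO settings VALUES ('speed', 'fast')")
rw.commit()
rw.close()

# mode=ro: the connection itself refuses all writes.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
print(ro.execute("SELECT value FROM settings WHERE key='speed'").fetchone())
# ('fast',)

try:
    ro.execute("UPDATE settings SET value='slow'")  # this WILL fail
except sqlite3.OperationalError as e:
    print("write rejected:", e)  # e.g. "attempt to write a readonly database"
```

Putting the file itself on read-only media (as described above) is the
stronger version of the same idea: the refusal is enforced by the hardware
rather than by the DBMS.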
Requests for this feature/capability are simply not important enough
(apparently) to rise to the level of "active development" (for
PostgreSQL). The feature *is* apparently supported under Oracle.
And, IIRC, under MySQL (though possibly as a kludge).
Given that I've adopted the philosophy that everything I'm building must
be available under an "open" (nonGPL) license, Oracle is out of the
question. As I've not been impressed with MySQL, that leaves me
with the only choice of taking ownership of a PostgreSQL release and
adding the features that *I* want *to* that release -- taking full
advantage of the license terms to do so as a "spin-off" codebase.
Of course, my priorities aren't the same as the PostgreSQL development
team -- nor its user base. So, my modifications will be of little
direct use to them. *But*, they'll meet *my* needs. If, at some
future date, someone wants to backport new PostgreSQL features to
my implementation; or, port my features to -CURRENT, that's entirely
up to them to do so -- without it impacting *my* efforts, time table, etc.
Exactly. I spend $15-70K/year on tools. Because I *don't* want to ever be
wondering why a *tool* doesn't do what I expect it to do. Saving a few
dollars (or, even a few tens of kilodollar) doesn't make sense when my
reputation and a client's *product* (plus his reputation) are on the line.
I did projects 30 years ago that FOSS software *still* isn't up to the
task to address! Let alone do so efficiently and with minimal effort.
The bigger problem was the fact that the OS didn't isolate the
applications from the hardware. So, you had the CP/M mentality bleeding into
the PC world -- developers thinking they could freely play with aspects
of the underlying hardware "at will". Until those aspects didn't exist
in some variant of the machine they *expected* to encounter.
FutureNet/Data I/O had probably the nicest schematic entry (capture)
system at the time (DASH/STRIDES). They had a whole suite of related
EDA tools -- logic synthesis, device programming, etc.
But, were super paranoid about copy protecting EVERYTHING they
sold! And, tried a bunch of different *hardware* approaches to
the problem (even adding "protection" to a many thousand dollar add-in
"coprocessor" card that was required for some of their tools! WTF?
I have to buy this expensive board *and* another board in case I happened
to acquire the expensive board "for free"???)
One scheme used small registered PAL's (programmable logic devices...
precursors to FPGA's) as "keys". The thinking was that they could
implement a finite state machine (FSM) *in* the PAL using the PAL's
internal register to store the "current state". Then, supply new
"inputs" to the PAL from software running in the application and,
KNOWING how the FSM was designed, they could PREDICT the new state
that the FSM would enter given its "current state" and the supplied inputs.
If you didn't know the logic governing the FSM's operation, you
wouldn't be able to predict the next state for all possible
input conditions (and for all possible "current states"!).
I wrote a tiny little program (two pages?) that would walk the
FSM through every possible set of states, applying every possible
set of inputs to each -- and recording the "next state" that the
FSM progressed into for each of these cases.
[This is actually tricky because you don't know where you will end up
at any given time -- yet, have to ensure you travel down every possible
path. Sort of like being deposited in a city and tasked with
making a map of all the roads -- without being able to *see* down any
of them! "I wonder where *this* will take me?" Obviously, you don't want
to keep taking the same path over and over again. Yet, need a means
of "discovering" every path that you aren't even aware of, currently!]
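That exploration can be sketched as a graph search (the "hidden" FSM below
is an invented stand-in for the real PAL, and the probing interface is
simplified): you may only apply inputs and observe the resulting state, so
to reach a state with untried inputs you have to navigate there using
transitions you've already discovered.

```python
from collections import deque

STATES, INPUTS = range(8), range(4)

def _hidden_next(state, inp):
    """The PAL's secret transition logic -- unknown to the 'attacker'."""
    return (state * 3 + inp + 1) % 8

def map_fsm(start):
    """Discover the full next-state table by only applying inputs
    and observing where the machine ends up."""
    state = start
    table = {}                                  # (state, input) -> next state
    untried = {s: set(INPUTS) for s in STATES}  # inputs not yet applied per state
    while True:
        if untried[state]:
            # Take an unexplored road from wherever we happen to be.
            inp = untried[state].pop()
            nxt = _hidden_next(state, inp)
            table[(state, inp)] = nxt
            state = nxt
            continue
        # Current state fully explored: BFS over *known* transitions
        # to find some reachable state that still has untried inputs.
        frontier, seen, paths = deque([state]), {state}, {state: []}
        target = None
        while frontier:
            s = frontier.popleft()
            if untried[s]:
                target = s
                break
            for (s2, i2), n2 in table.items():
                if s2 == s and n2 not in seen:
                    seen.add(n2)
                    paths[n2] = paths[s] + [i2]
                    frontier.append(n2)
        if target is None:
            return table        # every reachable road has been traveled
        for i in paths[target]: # replay known inputs to get to the target
            state = table[(state, i)]

table = map_fsm(start=0)
print(len(table))  # 32: all 8 states x 4 inputs mapped
```

The resulting table is exactly the "verbose map" that a logic-synthesis
tool can then reduce to equations for burning a duplicate key.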
Armed with this "map", I then used the logic synthesis tool from that
same paranoid vendor to convert the verbose map into a concise set
of equations. The same set of equations that yet another of their
tools would then *burn* into a virgin PAL device -- giving me a
duplicate, counterfeit copy of the genuine "key"... all using THEIR
tools to do so!
[Of course, I had to own a legitimate key to begin with. All I've
done is come up with a way of creating a backup copy of that key!]
Don't you love that? The IBM 5110 had a way to 'lock' the BASIC source
code on the floppies. They also threw a binary disk editor into the
software set. Lemme see... locked source has these bits in the header
set, unlocked doesn't. Bingo!
Yeah. Something about "left hand" and "right hand" and their knowledge of
each other's actions springs to mind! :>
I was tasked with bringing up a fairly large bit of ATE on a subcontract to
one of the IBM divisions. Used a "series 1" (IIRC) minicomputer to drive
the test program (used to verify that the device we had built was
performing according to spec).
Test procedure (script of some sort) resided on an 8" floppy.
Many of the tests were incredibly long (time wise) -- e.g., memory test.
As a result, if you ran those tests in the test suite, you were
unable to get many "passes" in an 8 hour shift. And, tedious to
select just *one* particular test (esp if you've made some change
to the device that could cause some other test to fail -- without
your being aware of that fact cuz you're focused on another test!).
System came with facilities to "edit files" (IIRC, similar to a hex editor
as the "script" wasn't in an "English" language). So, I'd routinely
patch the test floppy to eliminate the lengthy tests (that I "knew"
would pass) so I could concentrate on the tests that were catching failures.
Time for sell-off came. IBM technician came out. Lots of handshakes
all around (they're happy cuz we're done; we're happy cuz we're getting
PAID! :> ).
Sat down for the test (hours!). Memory test came up and announced
"Testing memory. Go for coffee."
Technician frowned. Test never said "Go for coffee" before!
He calmly opened his briefcase and pulled out *his* copy of
the floppy and said, "Shall we start, again?"
[No animosity. No suspicion that we were trying to pull a fast
one -- device passed with flying colors. *But*, he'd seen
something that shouldn't have been there, so... <shrug> Of
course, my boss knew who the "wise guy" had been...]