
View Full Version : Glass Panel Longevity


john smith
October 18th 06, 04:19 PM
The recent thread regarding the lack of parts for the Garmin 480 got me
to wondering just how long the G-1000's will "live"?
Steam gauges are forever, but integrated circuits are produced for a
given period, then production is ceased as newer chips come along.
Does Garmin mention anywhere how long they will support their products?
Their earliest GPS handhelds are coming up on 20 years.
We have seen Lowrance discontinue support for some of their products
that are less than 10 years old.

John Theune
October 18th 06, 04:24 PM
john smith wrote:
> The recent thread regarding the lack of parts for the Garmin 480 got me
> to wondering just how long the G-1000's will "live"?
> Steam gauges are forever, but integrated circuits are produced for a
> given period, then production is ceased as newer chips come along.
> Does Garmin mention anywhere how long they will support their products?
> Their earliest GPS handhelds are coming up on 20 years.
> We have seen Lowrance discontinue support for some of their products
> that are less than 10 years old.
I guess the more important question is how long will they support it. A
chip may go out of production, but will they commit to coming out with a
new board to replace the old one containing a chip that is no longer
available? I think it all comes down to money. It's one thing to
obsolete a $200.00 handheld; it's another to do so to a $30,000 nav
system. I'm going to guess that they will come out with replacement
modules for a while, because they can make enough money on the
service/parts to justify it.

ktbr
October 18th 06, 04:31 PM
It's not so much the chips as the software. Microsoft
ceases support for Windows 98 and previous versions, as do
most other companies who produce software (embedded or
otherwise).

They want (and need) to push you on to newer platforms
and the only way they can do it is to eventually cease
support for older systems.

October 18th 06, 05:36 PM
john smith wrote:
> The recent thread regarding the lack of parts for the Garmin 480 got me
> to wondering just how long the G-1000's will "live"?
> Steam gauges are forever, but integrated circuits are produced for a
> given period, then production is ceased as newer chips come along.
> Does Garmin mention anywhere how long they will support their products?
> Their earliest GPS handhelds are coming up on 20 years.
> We have seen Lowrance discontinue support for some of their products
> that are less than 10 years old.

Typically avionics manufacturers stockpile components that go out of
production. Most IC manufacturers make end-of-life announcements, and
give the consumers of these parts the option to make a lifetime buy.
Garmin can buy up as many thousands of each critical component as they
think they will need to support their products, and then eventually
redesign around the newer available components when they see fit.

The real question is "how committed is Garmin to supporting product X
after date Y".

Dean

Mxsmanic
October 18th 06, 05:51 PM
john smith writes:

> The recent thread regarding the lack of parts for the Garmin 480 got me
> to wondering just how long the G-1000's will "live"?
> Steam gauges are forever, but integrated circuits are produced for a
> given period, then production is ceased as newer chips come along.
> Does Garmin mention anywhere how long they will support their products?
> Their earliest GPS handhelds are coming up on 20 years.
> We have seen Lowrance discontinue support for some of their products
> that are less than 10 years old.

It's best not to assume that any for-profit company will continue to
support a product that no longer generates substantial amounts of
revenue, unless it is required to do so by law.

I expect that glass cockpits will pretty much follow the path of PCs,
unless legislation prevents it.

--
Transpose mxsmanic and gmail to reach me by e-mail.

Jim Logajan
October 18th 06, 06:33 PM
john smith > wrote:
> Steam gauges are forever,

They are?

Is the same model artificial horizon designed decades ago still
manufactured today?

Do people repair mechanical gauges or simply replace them when they stop
working?

Barney Rubble
October 19th 06, 06:58 PM
Don't forget the GPS-90, which no longer has updates from either Garmin or
Jepp, so the question applies to both of them; the G1000 would be useless if
Jepp pulls the plug. The price of keeping all this data up to date in these
modern machines is a very real hidden cost (XM, nav data, etc.).


"john smith" > wrote in message
...
> The recent thread regarding the lack of parts for the Garmin 480 got me
> to wondering just how long the G-1000's will "live"?
> Steam gauges are forever, but integrated circuits are produced for a
> given period, then production is ceased as newer chips come along.
> Does Garmin mention anywhere how long they will support their products?
> Their earliest GPS handhelds are coming up on 20 years.
> We have seen Lowrance discontinue support for some of their products
> that are less than 10 years old.

Guy Elden Jr
October 19th 06, 07:03 PM
> Do people repair mechanical gauges or simply replace them when they stop
> working?

It's a lot easier / cheaper to replace one mechanical gauge than an
entire instrument panel.

--
Guy

Mxsmanic
October 19th 06, 08:05 PM
Guy Elden Jr writes:

> It's a lot easier / cheaper to replace one mechanical gauge than an
> entire instrument panel.

And it's a lot easier to survive in flight with one failing gauge than
with an entire failing panel.

--
Transpose mxsmanic and gmail to reach me by e-mail.

Grumman-581[_3_]
October 19th 06, 08:09 PM
"Jim Logajan" > wrote in message
.. .
> Is the same model artificial horizon designed decades ago still
> manufactured today?
>
> Do people repair mechanical gauges or simply replace them when they stop
> working?

In a lot of cases, it's not so much if the item can be repaired, but whether
it is cost effective given the shop rate for the repair guy... If it wasn't
for the added cost (that gets passed on to us, of course) of FAA
certification, it would probably be cheaper for most items to be replaced
instead of repaired... After getting burnt on radio repairs a couple of
times for my old Narco, I replaced it with an MX-11 like was in my other
radio slot... With repairs to the Narco running a few hundred dollars a pop,
I could have bought the MX-11 with the money that I wasted on the Narco
repairs... Since I still ended up buying the MX-11, all that money was
wasted... An MX-11 runs around $900 these days and installation is just a
slide-in replacement for the Narco that it replaces and as such, you don't
need an A&P or avionics shop to do the replacement... If it wasn't for the
cost of FAA certification, I suspect that the MX-11s might approach the cost
of CB radios... It's not unreasonable to think that their price might drop
to the $100-200 range... At that price, repairs start getting the same as
the cost of a new radio, so it's more unlikely that someone would choose to
repair the item... Since the newer circuit boards are less component
repairable, technicians are more likely to just be replacing a complete
subassembly board instead of troubleshooting down to a component level...
This saves some time (i.e. money) in the troubleshooting stage, but it
increases the price in the repair parts stage...

Personally, I'm not a big fan of the one system does everything approach in
some of the glass panels... I have no problem with mechanical gauges being
replaced with electronic gauges, but I would prefer for them to be
independent, possibly communicating to some other system through some sort
of standard interface... At Rockwell, many of their new systems were
communicating via TCP/IP packets... I kind of liked this approach... It
seemed rather simple and elegant... A device would have a particular IP
address and port number associated with it... You could send information to
that device or retrieve information from it as appropriate... For a
non-compliant device, you could just design a TCP/IP interface to the device
that translated from the proprietary device information format to the TCP/IP
format...
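What such a scheme might look like in practice: below is a minimal sketch of a length-prefixed command protocol of the kind described above. Every name here (the framing, the field names, the commands) is invented for illustration and does not reflect anything Rockwell or any avionics vendor actually ships.

```python
import json
import struct

# Hypothetical wire format: each instrument has an IP address and port;
# commands travel as a 4-byte big-endian length prefix followed by a
# JSON payload, e.g. {"device": "adf-1", "cmd": "get_bearing"}.

def encode_message(device, cmd, **params):
    """Frame one command for one instrument as length-prefixed JSON."""
    payload = json.dumps({"device": device, "cmd": cmd,
                          "params": params}).encode()
    return struct.pack(">I", len(payload)) + payload

def decode_message(frame):
    """Inverse of encode_message: strip the length prefix, parse the JSON."""
    (length,) = struct.unpack(">I", frame[:4])
    return json.loads(frame[4:4 + length].decode())
```

A non-compliant device would sit behind a small translator process that speaks this format on one side and the proprietary format on the other, exactly as described above.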

Mxsmanic
October 19th 06, 09:01 PM
"Grumman-581" > writes:

> Personally, I'm not a big fan of the one system does everything approach in
> some of the glass panels... I have no problem with mechanical gauges being
> replaced with electronic gauges, but I would prefer for them to be
> independent, possibly communicating to some other system through some sort
> of standard interface... At Rockwell, many of their new systems were
> communicating via TCP/IP packets... I kind of liked this approach... It
> seemed rather simple and elegant... A device would have a particular IP
> address and port number associated with it... You could send information to
> that device or retrieve information from it as appropriate... For a
> non-compliant device, you could just design a TCP/IP interface to the device
> that translated from the proprietary device information format to the TCP/IP
> format...

Minimizing the software improves reliability and safety. TCP/IP
interfaces generally require software, and that's not a good thing.

--
Transpose mxsmanic and gmail to reach me by e-mail.

John Theune
October 19th 06, 09:09 PM
Mxsmanic wrote:
> Guy Elden Jr writes:
>
>> It's a lot easier / cheaper to replace one mechanical gauge than an
>> entire instrument panel.
>
> And it's a lot easier to survive in flight with one failing gauge than
> with an entire failing panel.
>
I guess that's why they have two of each, and redundant steam gauges.

Sylvain
October 19th 06, 10:57 PM
Guy Elden Jr wrote:

>> Do people repair mechanical gauges or simply replace them when they stop
>> working?
>
> It's a lot easier / cheaper to replace one mechanical gauge than an
> entire instrument panel.

I don't know much about the G1000, but I am currently reading Max
Trescott's book on the subject; isn't the whole idea of this system
that it is made up of easily serviceable and replaceable (and
presumably upgradable) modules?

--Sylvain

Rich Anderson
October 20th 06, 12:06 AM
john smith wrote:
>> Steam gauges are forever,

Jim Logajan wrote:
> They are?
>
> Is the same model artificial horizon designed decades ago still
> manufactured today?
>
> Do people repair mechanical gauges or simply replace them when they stop
> working?

Jim,

I make a very good living repairing mechanical instruments. My company (The Gyro House) repairs over 500 instruments every month. It often amazes me how old some of those instruments are, many are older than me and I'll be 53 in December. It is true that it is difficult to find parts for some of the older instruments, but it is just as difficult to find chips for electronic instruments that were built in the 1970 -1990 time period. Again I can say this with certainty as my shop works on electronic instruments as well as the mechanical ones.

Concerning replacing instruments: it is sometimes the case that an instrument can become so worn out that it is no longer repairable; however, out of the over 500 we do per month I can confidently say that less than 2% are found to be non-repairable.


Rich

Neil Gould
October 20th 06, 10:30 AM
Recently, Sylvain > posted:

> Guy Elden Jr wrote:
>
>>> Do people repair mechanical gauges or simply replace them when they
>>> stop working?
>>
>> It's a lot easier / cheaper to replace one mechanical gauge than an
>> entire instrument panel.
>
> I don't know much about the G1000, but I am currently reading Max
> Trescott's book on the subject; isn't the whole idea of this system
> that it is made up of easily serviceable and replaceable (and
> presumably upgradable) modules?
>
I don't know, but I would design the system that way. Even at the level of
integrated circuits, there are plug-in replacements for obsolete parts,
and I don't see any advantage to using unique components in this kind of
application.

Neil

October 20th 06, 03:47 PM
Neil Gould wrote:
> Recently, Sylvain > posted:
>
> > Guy Elden Jr wrote:
> >
> >>> Do people repair mechanical gauges or simply replace them when they
> >>> stop working?
> >>
> >> It's a lot easier / cheaper to replace one mechanical gauge than an
> >> entire instrument panel.

When digital watches first came out, they cost a lot more than
the old wind-ups even though they cost far less to produce. As more
manufacturers got into the act, the cost came down to more reasonable
levels.
I had an attitude indicator overhauled the other day. Cost
$675 Canadian. This stuff is only going to get more expensive as labour
goes up, since it can't be totally assembled by some robot. The life of
a gyro is rather short, too, especially in an operation like ours where
starting the airplane in cold weather is hard on gyro bearings. Engine
vibration eats gyros, and dry vacuum pumps last maybe 1200 hours.
If there are enough EFIS systems in use when a manufacturer
quits making them, some aftermarket manufacturer will find profit in
making replacement boards and displays for them under PMA rules. And as
more companies start making them, the up-front costs will come down. It
won't be instrument replacement costs that finally ground us; it will
be lawyers and insurance companies and heavy-handed regulation.

Dan

Grumman-581[_3_]
October 20th 06, 09:54 PM
"Neil Gould" > wrote in message
t...
> I don't know, but I would design the system that way. Even at the level of
> integrated circuits, there are plug-in replacements for obsolete parts,
> and I don't see any advantage to using unique components in this kind of
> application.

Unfortunately, some of the people making the decisions in these companies
don't necessarily see it that way... I was looking for a way to hook up the
output from my Northstar M1A LORAN to my laptop a few years ago so that I
could use it as input to a situation awareness program that I was writing...
Although NMEA 0183 was used by all the handheld LORANs and GPSs at that
time, it seems that Northstar chose to use a proprietary format for the data
stream coming out of their unit... While talking with them, I learned that
this was not uncommon within the avionics industry... A system like the
Argus moving map had to be able to understand all the possible data formats
of the various units that it supported...

Standards are great if everyone agrees to support them... Supposedly FireFox
is a true W3C compliant browser (unlike MS's IE)... I've encountered various
web sites that do not work correctly with it, but they work with IE... It
seems that developers get sloppy in that IE allows them to get away with
things that are not W3C compliant... Hell, I've even had to go back and
modify some of my own web apps that I created in the pre-FireFox days to
make them work with FireFox... Luckily, it's only been a matter of adding
"document.getElementById" for each field accessed by a JavaScript function
variable... It seems that IE allowed you to be lazy and not require this...

Oh well, I'm digressing... The point is, don't assume that companies will
make decisions that will give you the most flexibility... They have a vested
interest in tying you to their products... Even if they have a common
interface like the TCP/IP interface that Rockwell was using on the systems
that I worked on, it doesn't necessarily help unless there is a standardized
command packet format... Otherwise, you will find yourself with one device
that, although physically able to talk to another device, might not be
able to understand what the other is saying... With some devices, it might
not be that difficult to come up with a common message protocol that the
device could support, with others, this could be quite extensive... For
example, consider the following devices and what they might need:

ADF:
1. Set frequency
2. Get frequency
3. Get bearing to transmitter
4. Enable audio output
5. Disable audio output

Transponder:
1. Set squawk code
2. Get squawk code
3. Set current mode (standby, Mode-A, Mode-C, Mode-S)
4. Get current mode
5. Initiate IDENT
6. Get IDENT status

Of course, every unit would also need a "Get system health / status" message
for retrieving internal diagnostics... It would be *nice* to know when a
particular device could not be relied upon... <grin>
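A sketch of how the transponder command list above might be modeled as a simple dispatch handler; the class, command names, and mode strings are all hypothetical, invented here just to show the shape of such a message set.

```python
class Transponder:
    """Toy model of the transponder command set listed above."""
    MODES = ("standby", "mode-a", "mode-c", "mode-s")

    def __init__(self):
        self.squawk = "1200"
        self.mode = "standby"
        self.ident = False

    def handle(self, cmd, **params):
        # Dispatch one command; getters return values, setters return None.
        if cmd == "set_squawk":
            self.squawk = params["code"]
        elif cmd == "get_squawk":
            return self.squawk
        elif cmd == "set_mode":
            if params["mode"] not in self.MODES:
                raise ValueError("unknown mode")
            self.mode = params["mode"]
        elif cmd == "get_mode":
            return self.mode
        elif cmd == "ident":
            self.ident = True
        elif cmd == "get_ident":
            return self.ident
        elif cmd == "get_status":
            # The "Get system health / status" message every unit needs.
            return {"squawk": self.squawk, "mode": self.mode,
                    "ident": self.ident}
        else:
            raise ValueError("unknown command")
```

The ADF would get an analogous handler with its own five commands; the point is that every device speaks the same envelope, differing only in its command table.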

I would like to see a system where you could put the actual measuring
devices in one location and the panel only needed to contain the devices
that display the information... For example, you could buy a small 2"-4"
generic display that could be set to display the output for various types of
devices... If one of the displays was acting up, you could change another
display so that it would display the output from the particular measuring
device... One advantage of this might be that although you might have
redundant measuring devices, only one display for the pair might need to be
on the panel... A failed health check might cause an indication to the pilot
that the backup device needs to be made active... Maybe it would even be
possible to toggle between the two devices...

Hell, as long as we're at it, let's give it a panel mount plug so that we
can plug our laptop PC into it also... If you had the building blocks in
place with this sort of TCP/IP controlled devices, just think of what sort
of flexibility that you could get without having to buy a $20K+ avionics
package... Of course, my point of view is as a VFR pilot who would like
some of the capabilities of the flight director type systems, but is not
able (either from a monetary or a physical panel space point of view) to put
one in their aircraft...

Sylvain
October 21st 06, 12:47 AM
Grumman-581 wrote:

> Unfortunately, some of the people making the decisions in these companies
> don't necessarily see it that way...

so in short, the only reasonable long term strategy would be to make
these things open source; I couldn't agree more, but how do you
go about achieving this? the manufacturer must either be coerced into
doing it (via regulations) or have a good incentive, i.e., a
painfully obvious -- as in, that even the most bone-headed MBA
waving PHB manager could understand -- evidence that it would
be in their best interest to open up at least the interface
specs. But considering what I have seen so far in the industry
I am not holding my breath...

--Sylvain

Mxsmanic
October 21st 06, 01:00 AM
Sylvain writes:

> so in short, the only reasonable long term strategy would be to make
> these things open source; I couldn't agree more, but how do you
> go about achieving this? the manufacturer must either be coerced into
> doing it (via regulations) or have a good incentive, i.e., a
> painfully obvious -- as in, that even the most bone-headed MBA
> waving PHB manager could understand -- evidence that it would
> be in their best interest to open up at least the interface
> specs. But considering what I have seen so far in the industry
> I am not holding my breath...

How do you certify open source?

--
Transpose mxsmanic and gmail to reach me by e-mail.

Grumman-581[_1_]
October 21st 06, 01:05 AM
Sylvain wrote:
> so in short, the only reasonable long term strategy would be to make
> these things open source; I couldn't agree more, but how do you
> go about achieving this? the manufacturer must either be coerced into
> doing it (via regulations) or have a good incentive, i.e., a
> painfully obvious -- as in, that even the most bone-headed MBA
> waving PHB manager could understand -- evidence that it would
> be in their best interest to open up at least the interface
> specs. But considering what I have seen so far in the industry
> I am not holding my breath...

Unfortunately, you're right about this... I am not all that crazy about
the idea of regulatory standards though... Once you get the government
involved in something, often, it gets rapidly screwed up...

When I was wanting to interface a program with the Northstar M1A LORAN,
it took a bit of digging to find out who to contact, but they did not
have a problem sending me a copy of the actual interface document...
Yeah, it was probably a Xerox copy of a copy of a copy, but it was
sufficiently readable and I did not have a problem understanding what
would be necessary to write a program to interface with it... Still, the
problem is that if I wanted to also make the program work with a
different unit, I would have to get the interface document and write
code for the different unit... Hell, even with NMEA 0183 that is output
by nearly every handheld GPS, the output stream from one manufacturer
might vary from what a different manufacturer might choose to send...
Some manufacturers use one particular NMEA 0183 sentence for position
reporting whereas another manufacturer might use a different sentence...
If you look at some of the PC based moving map programs, you'll see that
they might give you an option on the type of GPS that you have... If
they have a pretty good selection of them and your GPS is not one of
them, it's at least possible that your GPS will be similar enough to one
of the ones that they do support for it to work for you... On the other
hand, some of the GPS manufacturers utilize a binary NMEA format instead
of the standard text based one...
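To illustrate the sentence-variation problem: NMEA 0183 reports position in (at least) the GGA and RMC sentences, at different field offsets, and receivers differ in which they emit. A minimal parser that accepts either might look like the sketch below (checksum validation and error handling omitted; a real program would need both):

```python
def nmea_to_deg(field, hemi):
    """Convert NMEA ddmm.mmmm (lat) or dddmm.mmmm (lon) plus a
    hemisphere letter into signed decimal degrees."""
    split = 2 if hemi in "NS" else 3   # latitude has 2 degree digits
    deg = int(field[:split]) + float(field[split:]) / 60.0
    return -deg if hemi in "SW" else deg

def parse_position(sentence):
    """Extract (lat, lon) from either a GGA or an RMC sentence, since
    different receivers report position in different sentences."""
    fields = sentence.split("*")[0].split(",")  # drop the checksum
    talker = fields[0][-3:]
    if talker == "GGA":
        lat_i = 2        # GGA: time, lat, N/S, lon, E/W, ...
    elif talker == "RMC":
        lat_i = 3        # RMC: time, status, lat, N/S, lon, E/W, ...
    else:
        return None
    return (nmea_to_deg(fields[lat_i], fields[lat_i + 1]),
            nmea_to_deg(fields[lat_i + 2], fields[lat_i + 3]))
```

Even this much only covers the text sentences; a receiver using a proprietary binary stream, as mentioned above, needs its own decoder entirely.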

Sylvain
October 21st 06, 01:07 AM
Mxsmanic wrote:
> How do you certify open source?

why should it be any different than proprietary stuff?

--Sylvain

Mxsmanic
October 21st 06, 01:29 AM
Sylvain writes:

> why should it be any different than proprietary stuff?

A total lack of control is one huge difference. A total lack of
accountability and liability is another. A total lack of
customer-oriented incentive for fixes and improvements is still
another. There are many differences.

--
Transpose mxsmanic and gmail to reach me by e-mail.

Sylvain
October 21st 06, 01:31 AM
Mxsmanic wrote:

>> why should it be any different than proprietary stuff?
>
> A total lack of control is one huge difference. A total lack of
> accountability and liability is another. A total lack of
> customer-oriented incentive for fixes and improvements is still
> another. There are many differences.

you just made it clear that you do not understand how open
source development works. I don't even know where to start...

--Sylvain

Bob Noel
October 21st 06, 02:19 AM
In article >,
Sylvain > wrote:

> you just made it clear that you do not understand how open
> source development works. I don't even know where to start...

not only that, but the troll doesn't understand anything about
certification. For those that care to learn, open source doesn't
impact certification. In all cases there must be configuration
control of the software and hardware.

--
Bob Noel
Looking for a sig the
lawyers will hate

Mxsmanic
October 21st 06, 08:26 AM
Sylvain writes:

> you just made it clear that you do not understand how open
> source development works.

I understand exactly how it works, and so does the market, which is
why safety-of-life software (and much other mission-critical software)
tends to be proprietary.

--
Transpose mxsmanic and gmail to reach me by e-mail.

Grumman-581[_1_]
October 21st 06, 09:01 AM
Sylvain wrote:
> you just made it clear that you do not understand how open
> source development works. I don't even know where to start...

Well, you could start by killfiling him like many of the rest of us have
done... Whether he is a troll or just an idiot, he's really not worth
wasting time on...

Grumman-581[_1_]
October 21st 06, 09:14 AM
Bob Noel wrote:
> not only that, but the troll doesn't understand anything about
> certification. For those that care to learn, open source doesn't
> impact certification. In all cases there must be configuration
> control of the software and hardware.

Given the same inputs, the software should give the exact same results
when run multiple times... That is one of the reasons that I prefer to
have all my libraries statically linked to an executable instead of
using shared / dynamic linked libraries, OCXs, or whatever the MS term
of the day is for it... If my entire executable is contained in a single
file, I know that if I wrote it right, it will run the same way every
time I execute it... The idea that someone could change something on the
system (update a shared library, DLL, or whatever) and cause my program
to run differently is rather offensive to me... That's also one of the
reasons that I do not like Java... I went through the Y2K mess and saw
the problems that developed from 3rd party DLLs and such and how bugs
could be introduced into a system by something that you have no control
over, and I didn't like it...
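The configuration control Bob mentions can be made concrete with a hash manifest: record a digest of every software item in the certified baseline, then verify the installed bits against that record, so a silently swapped library is caught. A minimal sketch (the manifest format here is invented for illustration):

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest of one software item's bytes."""
    return hashlib.sha256(data).hexdigest()

def verify(manifest: dict, installed: dict) -> list:
    """Return the names of items whose installed bits differ from the
    configuration-controlled baseline recorded in the manifest."""
    return [name for name, blob in installed.items()
            if digest(blob) != manifest.get(name)]
```

Statically linking, as described above, shrinks the number of items such a manifest has to track to one, which is much of its appeal.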

Neil Gould
October 21st 06, 12:32 PM
Recently, Grumman-581 > posted:

> "Neil Gould" > wrote in message
> t...
>> I don't know, but I would design the system that way. Even at the
>> level of integrated circuits, there are plug-in replacements for
>> obsolete parts, and I don't see any advantage to using unique
>> components in this kind of application.
>
> Unfortunately, some of the people making the decisions in these
> companies don't necessarily see it that way...
> (rest snipped for brevity)
>
Just to be clear, I was referring to hardware components -- "chips", etc.
In hardware, there is usually more than one way to accomplish the same
task, and as specific ICs become obsolete, there is usually (not always) a
plug-in replacement available. The best designs will be based on
high-volume usage ICs, as those are most likely to be replicated or
replaced in the future.

I completely agree with what you are saying about software and interfaces.
All bets are off, and given the willingness of the public to endure
practices that pretty much assure them of having to spend money repeatedly
to gain marginal capability -- or in many cases to gain nothing -- I don't
see much hope of this changing.

But, if the subject is simply keeping a specific device working over a
long period of time, as I see it, the crossover between these two issues
is likely to be the availability of interface connectors, but even at
that, there are some work-arounds.

Neil

Roger (K8RI)
October 22nd 06, 02:54 AM
On Fri, 20 Oct 2006 17:31:51 -0700, Sylvain > wrote:

>Mxsmanic wrote:
>
>>> why should it be any different than proprietary stuff?
>>
>> A total lack of control is one huge difference. A total lack of
>> accountability and liability is another. A total lack of
>> customer-oriented incentive for fixes and improvements is still
>> another. There are many differences.
>
>you just made it clear that you do not understand how open
>source development works. I don't even know where to start...

It is exasperating, isn't it?

Roughly without going into a lot of detail:

Two things. The words "Open source" mean exactly what they say. The
source code must be included with the application.

The certification of any Open Source application is no different than
the certification of proprietary software.
>
>--Sylvain
Roger Halstead (K8RI & ARRL life member)
(N833R, S# CD-2 Worlds oldest Debonair)
www.rogerhalstead.com

Roger (K8RI)
October 22nd 06, 03:10 AM
On Sat, 21 Oct 2006 08:14:06 GMT, Grumman-581
> wrote:

>Bob Noel wrote:
>> not only that, but the troll doesn't understand anything about
>> certification. For those that care to learn, open source doesn't
>> impact certification. In all cases there must be configuration
>> control of the software and hardware.
>
>Given the same inputs, the software should give the exact same results
>when run multiple times... That is one of the reasons that I prefer to
>have all my libraries statically linked to an executable instead of
>using shared / dynamic linked libraries, OCXs, or whatever the MS term

DLLs work great and are easy to use, but things can get interesting if
someone changes one, as in an MS update <:-)) They don't bother to
tell you what they changed in which module. They are both the strong
and weak points of the MS operating systems.

>of the day is for it... If my entire executable is contained in a single
>file, I know that if I wrote it right, it will run the same way every
>time I execute it...

Well, usually as computers have been known to get confused. <:-))

>The idea that someone could change something on the
>system (update a shared library, DLL, or whatever) and cause my program

Man, it must be boring with programs that do the same thing every time.
Where's your sense of adventure? The thrill of hunting for side
effects that weren't there when you first debugged the program. Side
effects caused in some code written by someone else somewhere in a
library that may contain hundreds of thousands of lines of code.

>to run differently is rather offensive to me... That's also one of the
>reasons that I do not like Java... I went through the Y2K mess and saw
>the problems that developed from 3rd party DLLs and such and how bugs

Third party DLLs *could* be written in such a manner that they would
not cause problems, but that's not the case. Plus, they would make
certification almost impossible, as the program code can change without
being checked against certification. I wonder, when you refer to the
Y2K mess, if you might be referring to a very large drafting program
where the programmers rewrote some standard DLLs? Update the OS or
DLL and the program would quit. Put the original DLL back and strange
things could start happening to other stuff.

Then there was VB up through, I believe, version 5. Write a program,
compile it as stand-alone so it contained all the necessary DLLs, and
guess what: the DLL creation date was not the original creation date,
but the date the program was compiled. So installing that program
would (not could) result in newer DLLs being overwritten by older
DLLs with a newer creation date. Man, but that one about drove me nuts
trying to sort out.

>could be introduced into a system by something that you have no control
>over and I didn't like it...

What? You don't like "side effects" and here I thought they were a
feature and not a bug. <:-)) No sense of adventure at all.
Roger Halstead (K8RI & ARRL life member)
(N833R, S# CD-2 Worlds oldest Debonair)
www.rogerhalstead.com

Roger (K8RI)
October 22nd 06, 03:36 AM
On Sat, 21 Oct 2006 09:26:37 +0200, Mxsmanic >
wrote:

>Sylvain writes:
>
>> you just made it clear that you do not understand how open
>> source development works.
>
>I understand exactly how it works, and so does the market, which is
>why safety-of-life software (and much other mission-critical software)
>tends to be proprietary.

I know I shouldn't do this but... proprietary and safety-of-life
software are linked not because of safety but because of "for profit".
I.e., they are proprietary because they do not want others to know how
they are doing what they are doing.
Open source would mean providing the source code, and they don't want
that. Hence the decompiling and reverse-engineering clauses in the
license as well.

The only thing difficult about decompiling or disassembling is the
size of the current state and next state arrays for today's
processors. In the days of the 6502 and straight C they were small,
compact, fast, and easy to write.
In the case of compilers and decompilers you only need to change the
two arrays to change languages or processors. Today the arrays for an
assembler/disassembler are huge. What's the size of the instruction set
for a late model Athlon or Pentium? The set for a 6502 or Z80 would
fit onto 4 sheets the size of a bingo card and by chance were referred
to as bingo cards.

Any good software engineer could write a disassembler and decompiler
to turn that proprietary software into readable code (there's a bit
more to it than that, but I'll leave it there), BUT who'd want to?
You'd then have to sift through many thousands of lines of code. Any
company that did that would need a team to do the sifting, and the
feds tend to frown on the whole process.

In my undergrad work I had to write a compiler in one term. In
graduate school the same class with the same book (different
university) was split between two terms (one of which was 500 level)
and was 8 credit hours instead of 4. They wouldn't let me take it
again <:-))
Roger Halstead (K8RI & ARRL life member)
(N833R, S# CD-2 Worlds oldest Debonair)
www.rogerhalstead.com

Roger (K8RI)
October 22nd 06, 03:44 AM
On Fri, 20 Oct 2006 16:47:48 -0700, Sylvain > wrote:

>Grumman-581 wrote:
>
>> Unfortunately, some of the people making the decisions in these companies
>> don't necessarily see it that way...
>
>so in short, the only reasonable long-term strategy would be to make
>these things open source; I couldn't agree more, but how do you
>go about achieving this? the manufacturer must either be coerced in
>doing it (via regulations) or have a good incentive, i.e., a

Even if "open source", you have not addressed the hardware-in-hand
issue. It would make it easier to replace with something more
modern, BUT whatever goes in would need to be certified. And for a
major piece of hardware, having a chip fail that is no longer
available...what do you do? Developing a replacement board, even if
you know the signal config into and out of the board, is going to be
expensive to the point of not being economically viable.

What would be nice would be some sort of standardization for the I/O
protocols instead of everyone doing it "their own way". Of course I
think the same thing about the controls.

>painfully obvious -- as in, that even the most bone-headed MBA
>waiving PHB manager could understand -- evidence that it would
>be in their best interest to open up at least the interface
>specs. But considering what I have seen so far in the industry
>I am not holding my breath...
>
>--Sylvain
Roger Halstead (K8RI & ARRL life member)
(N833R, S# CD-2 Worlds oldest Debonair)
www.rogerhalstead.com

Roger (K8RI)
October 22nd 06, 03:48 AM
On Thu, 19 Oct 2006 12:58:12 -0500, "Barney Rubble"
> wrote:

>Don't forget the GPS-90, which no longer has updates from either Garmin or
>Jepp, so the question is for both of them, the G1000 would be useless if
>Jepp pull the plug. The price of keeping all this data up to date in these
>modern machines is a very real hidden cost (XM, nav data etc).
>

I wouldn't call it hidden. It's right there just like a chart
subscription, and it's in a proprietary format so we can't download and
install the updated government information.

>
>"john smith" > wrote in message
...
>> The recent thread regarding the lack of parts for the Garmin 480 got me
>> to wondering just how long the G-1000's will "live"?
>> Steam gauges are forever, but integrated circuits are produced for a
>> given period, then production is ceased as newer chips come along.
>> Does Garmin mention anywhere how long they will support their products?
>> Their earliest GPS handhelds are coming up on 20 years.
>> We have seen Lowrance discontinue support for some of their products
>> that are less than 10 years old.
>
Roger Halstead (K8RI & ARRL life member)
(N833R, S# CD-2 Worlds oldest Debonair)
www.rogerhalstead.com

Grumman-581[_3_]
October 22nd 06, 04:58 AM
"Roger (K8RI)" > wrote in message
...
> DLLs work great and are easy to use but things can get interesting if
> some one changes one as in an MS update<:-)) They don't bother to
> tell you what they changed in which module. They are both the strong
> and weak points of the MS operating systems.

Yeah, you ask 'em about it and you get "nawh, nothing changed"... Once you
finally trace it down to something that is acting differently in their DLL,
you get a "oh, *that*... I forgot about that, but it *shouldn't* make a
difference"... If I'm writing an interface that is to be used by other
developers, especially if they are developing in a different language than I
am, I'll make it a DLL, but I'll also create a static library in case they
have the capability to link directly to it... Although they provide a lot of
flexibility and initially I like them, I've come to believe that they create
more headaches at a later date than they are often worth... When trying to
debug a user's problem at a later date, you need to determine if he is using
the same exact versions of all the DLLs that you used when you created the
system... More often than not, either he or a MS update changed something...

> Man, it must be boring with programs that do the same thing every time.
> Where's your sense of adventure? The thrill of hunting for side
> effects that weren't there when you first debugged the program. Side
> effects caused in some code written by someone else somewhere in a
> library that may contain hundreds of thousands of lines of code.

I've been in this profession enough years that I don't get enjoyment ****ing
on an electrical fence multiple times... The new kids out there think that
they should use the latest thing that MS provides... They haven't been
around long enough to realize that if you insist on doing that, you will not
have a stable system... At the very least, you'll always be changing your
code to conform with the latest and (supposedly) greatest, even if you luck
out and are not chasing bugs that the MS updates throw your way...

> Third party DLLs *could* be written in such a manner that they would
> not cause problems but that's not the case.

When I write DLLs, I always give a function that returns version information
associated with it just in case the file size doesn't change or someone
changes the date on the file... I also write my DLLs so that they do not
need to be registered... Same with my applications... My philosophy is that
the installation of an application should consist of dropping the
application's files into a particular directory and running it from there...
I especially don't like the MS idea of putting all the DLLs in the windows
directory... If you do it their way and two applications have DLLs by the
same name, you're likely to get screwed rather quickly... If you do it my
way, it's possible for multiple applications to have the same names for
their DLLs and more importantly, it's possible to have multiple versions of
the same application installed upon your machine...

> Plus they would make certification almost impossible as the
> program code can change without being checked against
> certification. I wonder when you refer to the Y2K mess
> if you might be referring to a very large drafting program
> where the programmers rewrote some standard DLLs?
> Update the OS, or DLL and the program would quit.
> Put the original DLL back and strange things could start
> happening to other stuff.

Nawh, most of the stuff was for in-house applications for the company for
which I was contracting to at that time... None of the applications were CAD
related... The problems that I'm referring to are related to when we knew
that a problem was going to develop... Originally, we knew that on 1/1/2000,
some systems were going to have a problem... The obvious fix for that is to
change it to a 4-digit year and we should be set for the next 8,000 years...
Yeah, it's not a permanent solution, but it's probably safe enough in that
it is rather unlikely that any program running today will still be running
in 8,000 years... Did we use this solution? No ****in' way... Instead, we
went with a date window approach so that if the 2-digit year is less than a
certain value, it refers to the 1900s and if it is greater than that value,
it refers to the 2000s... The problem arises in that there is no common
cutoff date between the various systems... For one system, a 2-digit year of
"30" might mean "1930", whereas for another system, it might mean "2030".. A
lot of these "fixes" were to 3rd party DLLs, so we might not even be sure of
when the cutoff date is going to be... Basically, we traded a known single
problem date for multiple future problem dates and we don't know when they
are either... Now, throw into this the fact that some of the DLLs are from
MS and they might get updated during various service packs and such and you
have a system where you cannot guarantee it will run as it did in your test
bed...

> Then there was VB up through I believe version 5. Write a program,
> compile it as stand alone so it contained all the necessary DLLs and
> guess what. The DLL creation date was not the original creation date,
> but the date the program was compiled. So installing that program
> would (not could) result in newer DLLs being over written by older
> DLLs with a newer creation date. Man but that one about drove me nuts
> trying to sort out.

Yeah, VB was always good for decreasing your cranial hair count... I
remember in one of the versions how they changed the way that they passed
values to DLLs... That ****ed over my DLLs rather quickly... It took awhile
before I stumbled across a couple of lines in the manual about that
change...

> What? You don't like "side effects" and here I thought they were a
> feature and not a bug. <:-)) No sense of adventure at all.

Yep, no sense of adventure... Hell, I even don't like writing the same code
twice and carry around common libraries that I've written over the years
from project to project so I don't have to waste time doing the exact same
thing again... I believe in having a single set of code that can compile on
every platform upon which the application needs to run... I don't like
having to make the same change in multiple sets of source code... Yeah, I
guess I'm just lazy... <grin>

Morgans[_2_]
October 22nd 06, 05:06 AM
"Grumman-581" > wrote

> Yeah, VB was always good for decreasing your cranial hair count

What is the VB program to which you are talking about?

Head slap to follow, most likely! <g>
--
Jim in NC

Mxsmanic
October 22nd 06, 09:20 AM
"Grumman-581" > writes:

> I've been in this profession enough years that I don't get enjoyment ****ing
> on an electrical fence multiple times... The new kids out there think that
> they should use the latest thing that MS provides... They haven't been
> around long enough to realize that if you insist on doing that, you will not
> have a stable system... At the very least, you'll be always changing your
> code to conform with the latest and (supposedly) greatest even if you luck
> out and are not chasing bugs that the MS updates throw your way...

Yes. Now multiply this by 1000 and apply it to aviation.

Problems with a desktop PC are an inconvenience. Problems with
avionics can be a deathtrap.

And unfortunately, the people building some of the latter systems have
a culture derived from the former, and that's a very bad thing.

--
Transpose mxsmanic and gmail to reach me by e-mail.

Mxsmanic
October 22nd 06, 09:22 AM
Roger (K8RI) writes:

> I know I shouldn't do this but... proprietary and safety-of-life
> software are linked not because of safety but because of "for profit".

They are also linked because of accountability and stability issues.
One reason people buy proprietary products is that the manufacturer
can be held accountable and is unlikely to have multiple,
uncontrolled, unverified versions floating around. The greater the
potential liability or revenue loss for the vendor, the more stable
and reliable the proprietary product will be.

In open source, there is neither accountability nor stability, and
there is no profit motive, either. This militates against the kind of
security that safety-of-life applications require.

--
Transpose mxsmanic and gmail to reach me by e-mail.

Bob Noel
October 22nd 06, 09:56 AM
In article >,
"Morgans" > wrote:

> > Yeah, VB was always good for decreasing your cranial hair count
>
> What is the VB program to which you are talking about?

Probably Visual Basic.

>
> Head slap to follow, most likely! <g>

......

--
Bob Noel
Looking for a sig the
lawyers will hate

Roger (K8RI)
October 23rd 06, 12:34 AM
On Sun, 22 Oct 2006 04:56:34 -0400, Bob Noel
> wrote:

>In article >,
> "Morgans" > wrote:
>
>> > Yeah, VB was always good for decreasing your cranial hair count
>>
>> What is the VB program to which you are talking about?
>
>Probably visual basic

Yup!

I don't remember if they changed with version 5 or after version 5,
but prior to the change, compiling a stand-alone program sometimes
brought along some surprising and unwanted side effects. More than
once I had to copy the offending DLL into the directory with the VB
program and then restore the original DLL in the windows directory. It
was a royal PITA.

As a side note, you could start with about 37K of source code, which is
what the greeters program for the EAA ran. That compiled into about 7
megs due to the DLLs.


>
>>
>> Head slap to follow, most likely! <g>
>
>.....
Roger Halstead (K8RI & ARRL life member)
(N833R, S# CD-2 Worlds oldest Debonair)
www.rogerhalstead.com

Grumman-581[_1_]
October 23rd 06, 05:04 AM
Morgans wrote:
> What is the VB program to which you are talking about?

At that time, it was in-house developed security administration
software... The real work was done in 'C' (i.e. the various daemons,
services, and DLLs); the gee-whiz user interfaces were in VB... It
allowed us to utilize developers who could paint pretty pictures ...
uhhh ... user interfaces ... but who were not really competent in the
technical matters of communicating with remote machines, much less the
security aspects of each of the different platforms... I gave the VB
developers a nice library interface that made adding a new system pretty
much cookie-cutter for them, and they put the bells and whistles in it to
make it look fancy for the users...

Grumman-581[_1_]
October 23rd 06, 05:10 AM
Roger (K8RI) wrote:
> As a side note, you could start with about 37K of source code which is
> what the greeters program ran for the EAA. That compiled into about 7
> megs due to the DLLs

VB has always been a pig... Hell, even back in the non-Windows days of
just compiled MS-BASIC it sucked... It was hidden from you by the fact
that the executable was small because it didn't contain everything that
got loaded at runtime... You had to have the BASIC runtime loaded also...

The more languages I deal with over the years, the more I appreciate
straight 'C'... It's clean, it's efficient, it's predictable...

cjcampbell
October 23rd 06, 05:27 AM
john smith wrote:
> The recent thread regarding the lack of parts for the Garmin 480 got me
> to wondering just how long the G-1000's will "live"?
> Steam gauges are forever, but integrated circuits are produced for a
> given period, then production is ceased as newer chips come along.
> Does Garmin mention anywhere how long they will support their products?
> Their earliest GPS handhelds are coming up on 20 years.
> We have seen Lowrance discontinue support for some of their products
> that are less than 10 years old.

Just like everything else that gets discontinued, either you have to
throw it away or someone starts manufacturing short runs of
discontinued parts. That is not infeasible, by the way, for a G1000.
The big problem would be overcoming any legal obstacles thrown up by
Garmin.

The thing is, most chips and circuit boards should last for a very long
time, possibly longer than the airplane, and there will be replacement
parts from scavenged airplanes available.

Roger (K8RI)
October 23rd 06, 07:39 AM
On Mon, 23 Oct 2006 04:10:17 GMT, Grumman-581
> wrote:

>Roger (K8RI) wrote:
>> As a side note, you could start with about 37K of source code which is
>> what the greeters program ran for the EAA. That compiled into about 7
>> megs due to the DLLs
>
>VB has always been a pig... Hell, even back in the non-Windows days of
>just compiled MS-BASIC it sucked... It was hidden from you by the fact
>that the executable was small because it didn't contain everything that
>got loaded at runtime... You had to have the BASIC runtime loaded also...
>
>The more languages I deal with over the years, the more I appreciate
>straight 'C'... It's clean, it's efficient, it's predictable...

And they finally gave us the ability to turn the type checking on.
<:-)) Originally it just assumed the programmer knew what they were
doing and let you do it, whatever it was. Lordy... pointers with
dynamic memory, dynamic arrays, linked lists, circular linked lists,
and bidirectional linked lists. It gave the programmer a good feel
for the two words, new and free. <:-)) Oh yeah, and memory leaks.

Straight C is what I wrote my first compiler in. I was the only one in
the class who wrote an input scanner using current- and next-state
arrays. They tell me I was only the third one in the history of the
school to do that. <:-)) Everyone else used logic statements.
The compiler wasn't bad, but network theory almost did me in.


Roger Halstead (K8RI & ARRL life member)
(N833R, S# CD-2 Worlds oldest Debonair)
www.rogerhalstead.com

Roger (K8RI)
October 23rd 06, 07:50 AM
On Mon, 23 Oct 2006 04:04:25 GMT, Grumman-581
> wrote:

>Morgans wrote:
>> What is the VB program to which you are talking about?
>
>At that time, it was in-house developed security administration
>software... The real work was done in 'C' (i.e. the various daemons,
>services, and DLLs, and the gee-whiz user interfaces were in VB... It
>allowed us to utilize developers who could paint pretty pictures ...
>uhhh ... user interfaces ... but who were not really competent in the
>technical matters of communicating with remote machines, much less the
>security aspects of each of the different platforms... I gave the VB
>developers a nice library interface that made adding a new system pretty
>much cookie cutter for them and they put the bells and whistles in it to
>make it look fancy for the users...

Basically (no pun intended... ah, what the hell...) it was pretty much
the first of the "bloat code" generators, which was followed by Delphi
and the other "visual" tools. Delphi was kind of nice to use as it
was pretty much a visual Pascal.

The most difficult thing I found about C was making sure to put in the
internal documentation so you could figure out what you did two weeks
after you wrote it. C has often been referred to as a "write only"
language. <:-))

Delphi and Visual C++ are true object-oriented languages ... if the
programmer knows how to use them that way. Otherwise they end up with
a bunch of source code that compiles into really big programs with a
lot of useless links and DLLs and no, or little, useful inheritance
(everything declared global). I don't know if VB ever evolved into a
true object-oriented language or not, as I've done so little
programming in the past few years. It seems like I remember that it
was pretty much object oriented.

Roger Halstead (K8RI & ARRL life member)
(N833R, S# CD-2 Worlds oldest Debonair)
www.rogerhalstead.com

Grumman-581[_1_]
October 23rd 06, 11:39 PM
Roger (K8RI) wrote:
> And they finally gave us the ability to turn the type checking on.
> <:-)) Originally it just assumed the programmer knew what they were
> doing and let you do it what ever it was. Lordy...pointers with
> dynamic memory, dynamic arrays, linked lists, circular linked lists
> and bidirectional linked lists. It gave the programmer a good feel
> for the two words, new and free. <:-)) Oh yah, and memory leaks.

Wow, expecting engineers to do it correctly... Radical concept, eh? <grin>

With regards to type checking, well, you just ran 'lint' on your code to
give yourself a better feel for it... I liked the level of strictness
with regards to type checking in standard 'C'... I hated what Ada
required us to do... More often than not, things became a system.address
type in Ada for what I was having to do...

You can be object oriented in standard 'C'; it just takes the right
frame of mind... While I was working on a NASA contract for the MCC and
SSCC, we utilized this technique. Here's a write-up that I did on it
a while back describing it:

http://grumman581.googlepages.com/object-oriented-c

We had a lot of different groups each working on different portions of
the entire system and these would be linked at a later date and expected
to work together. Collisions at link time were not acceptable.

Roger (K8RI)
October 24th 06, 12:41 AM
On Mon, 23 Oct 2006 22:39:00 GMT, Grumman-581
> wrote:

>Roger (K8RI) wrote:
>> And they finally gave us the ability to turn the type checking on.
>> <:-)) Originally it just assumed the programmer knew what they were
>> doing and let you do it what ever it was. Lordy...pointers with
>> dynamic memory, dynamic arrays, linked lists, circular linked lists
>> and bidirectional linked lists. It gave the programmer a good feel
>> for the two words, new and free. <:-)) Oh yah, and memory leaks.
>
>Wow, expecting engineers to do it correctly... Radical concept, eh? <grin>
>
Radical huh? <:-))

>With regards to type checking, well, you just ran 'lint' on your code to

This was before ANSI C and lint.

>give yourself a better feel for it... I liked the level of strictness
>with regards to type checking in standard 'C'... I hated what Ada

The original C didn't have any type checking. You could add or
combine anything with anything regardless of type, be it an address,
integer, floating point, pointer, array, string, ordinal value,
whatever. It added a new dimension to debugging. <:-))

Never have worked with Ada.

>required us to do... More often than not, things became a system.address
>type in Ada for what I was having to do...
>
>You can be object oriented in standard 'C', it just takes the right
>frame of mind... While I was working on a NASA contract for the MCC and

"Object Oriented" is really a programming concept, although we tend to
think of specific languages such as Delphi and C++ as being object
oriented. If the programmer properly organizes the code in whatever
language he is using, he can create the same inheritance and
relationships in most languages, although being able to define a
variable as local or global makes it a tad easier. Of course, global
also makes it easier to defeat the whole concept.

>SSCC, we utilized this technique. Here's a write-up that I did on it
>awhile back describing it:
>
>http://grumman581.googlepages.com/object-oriented-c
>
>We had a lot of different groups each working on different portions of
>the entire system and these would be linked at a later date and expected
>to work together. Collisions at link time were not acceptable.

You mean something like a number of modules/routines using the same
variable name defined locally, and then someone assigns it global? Or
assigning a value to an address that someone else uses for something
else? I don't know how many times I accidentally assigned global or
local wrongly. I haven't done any programming in C or even C++ in a
long time. (I've been retired 10 years now.)

Roger Halstead (K8RI & ARRL life member)
(N833R, S# CD-2 Worlds oldest Debonair)
www.rogerhalstead.com

Grumman-581[_1_]
October 24th 06, 02:24 AM
Roger (K8RI) wrote:
> This was before ANSI C and lint.

I've had more than my share of pre-ANSI compilers... It seems that
nearly every project that I work on, at least one of the machines not
only does not have a C++ compiler for it, it doesn't have one that is
POSIX or ANSI compliant either... As such, you program for the greatest
common denominator -- standard 'C' -- so that you can have a single
piece of source code that compiles across all platforms...

> The original C didn't have any type checking. You could add, or
> combine anything with anything regardless of type be it an address,
> integer, floating point, pointer, array, string, ordinal value, what
> ever. It added a new dimension to debugging<:-))

But it *built character*... Just look at the quality of developers that
you see these days and it will readily become apparent to you that we
have a *lot* more character than they do...

> Never have worked with Ada.

You're not missing much... A language designed by committee -- and it
shows...

> "Object Oriented" is really a programming concept although we tend to
> think of specific languages such as Delphi and C++ as being Object
> Oriented.

Yeah, as I've always said, you can write crap code in *any* language...

>If the programmer properly organizes the language he is
> using he can create the same inheritance and relationships in most
> languages although being able to define a variable as local or global
> makes it a tad easier. Of course global makes it easier to defeat the
> whole concept too.

Agreed... Allowing the concept of scoping is a quite useful feature in a
language... More often than not though, most code that I've reviewed on
projects believes in basically two levels of scoping -- at the global
level and at the function level... Occasionally, you will see a
developer declare a variable in a local block of code, but it doesn't
seem to happen that often, and usually only for some sort of loop counter
or accumulator... From a documentation standpoint though, it looks better
if the variables are defined at the beginning of the functions... Code
should be readable and documentation should be inline so that you can
remember *why* you were doing something a particular way when you have
to come back in a couple of years and modify the code... I like to think
that you should document the code as if you were going to be having it
published in a major publication and subject to peer review... I don't
see that happening with the developers who utilize the MS Visual C++ (or
whatever) type of products... They draw their user interfaces and plug
in the callback actions and about the only comments that you get are
whatever the MS development environment automatically includes in the
code...

> You mean something like a number of modules/routines using the same
> variable name defined locally and then some one assigns it global? Or
> assigning a value to an address that some one else uses for something
> else.? I don't know how many times I accidentally assigned global or
> local wrongly. I haven't done any programming in C or even C++ in a
> long time. (I've been retired 10 years now)

When you are developing a library, you usually have some sort of stub
executable that links in the library for testing during the initial
development... Let's say that you decide to use a variable 'x' in your
library and it needs to be declared by the main module... Let's say that
another library is also expecting a variable 'x'... If the library
routines are not expecting the variable to be modified by another
library routine, there could be issues here... If the first library
instead uses 'AAA_x' and the second library uses 'BBB_x', you've
prevented a collision at link time... Requiring the user of your library
to declare variables is just a core dump waiting to happen though...
It's better style to have your library's header file do the
declaring, and have it look to see if something is defined before either
declaring the variable or giving an extern reference to the variable...

For example:

--- USER MODULE ---
/* Define DECLARE_VARS in exactly one translation unit so the
   variables are defined exactly once; every other includer of the
   header gets extern declarations instead. */
#define DECLARE_VARS
#include "MYOBJ.h"
#undef DECLARE_VARS
--- USER MODULE ---

--- MYOBJ.h ---
#ifdef DECLARE_VARS
int MYOBJ_x;          /* definitions: storage allocated here */
int MYOBJ_y;
#else
extern int MYOBJ_x;   /* declarations only: storage lives elsewhere */
extern int MYOBJ_y;
#endif
--- MYOBJ.h ---

On the other hand, if a variable is supposed to be only global to the
modules in your library and not visible to someone linking your module
to their code, you should declare the variable with a storage class of
'static' to minimize the chance that someone could screw something up...

It all boils down to proper programming style... If you want to write
good code in 'C', you can... If you want to write crap code in C++, you
also can...

Roy Smith
October 24th 06, 04:52 AM
In article >,
Grumman-581 > wrote:

> Roger (K8RI) wrote:
> > This was before ANSI C and lint.
>
> I've had more than my share of pre-ANSI compilers... It seems that
> nearly every project that I work on, at least one of the machines not
> only does not have a C++ compiler for it, it doesn't have one that is
> POSIX or ANSI compliant either... As such, you program for the greatest
> common denominator -- standard 'C' -- so that you can have a single
> piece of source code that compiles across all platforms...
>
> > The original C didn't have any type checking. You could add, or
> > combine anything with anything regardless of type be it an address,
> > integer, floating point, pointer, array, string, ordinal value, what
> > ever. It added a new dimension to debugging<:-))
>
> But it *built character*... Just look at the quality of developers that
> you see these days and it will readily become apparent to you that we
> have a *lot* more character than they do...

Yeah. Most of them don't know which end of a soldering iron to pick up,
and wouldn't know what to do with a logic analyzer if you turned it on for
them and stuck their nose in the instruction book.
