On Thu, 24 Jun 2004 20:02:34 +0000, John Theune wrote:
> This all being said, I work with a medical application that runs on
> Windows, and we have had a lot of machines running 4.0 and our app (and
> nothing else) that have run for very long periods of time, 7x24. I think
> our record is 1 year, and it did not crash; we rebooted it to load a
> newer version of the app. On the other hand I've had a workstation
This is actually a gray area. A long uptime by itself is not a problem for
just about any OS; the problems show up when the system is under load for
extended periods of time. While uptimes of a year for NT systems are not
unheard of, they are generally the exception. Most companies with serious
business applications running on an MS platform make rebooting the systems
part of their required maintenance, on anything from a weekly to a
bimonthly schedule. The usual reasons include stability, latency issues,
memory leaks, locked resources (because of the predominantly threaded
environment), and overall degraded performance (generally associated with
memory fragmentation).
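As a purely illustrative sketch (not from anyone's real product), the leak
problem usually has this shape in C: a long-running service loop that
allocates per request and never frees, so the process footprint only grows
until somebody restarts it:

    #include <stdlib.h>
    #include <string.h>

    /* Toy request handler: copies the payload and "forgets" to free it,
       so every call leaks a little memory. */
    static void handle_request(const char *payload)
    {
        char *copy = malloc(strlen(payload) + 1);
        if (copy == NULL)
            return;
        strcpy(copy, payload);
        /* ... pretend to do some work with copy ... */
        /* free(copy);   <-- the missing call */
    }

    int main(void)
    {
        /* Long-running "server" loop; memory use only ever climbs. */
        for (;;)
            handle_request("some incoming request");
    }

Fragmentation is the subtler cousin: even with every free() in place, a
heap churned for months can end up with free space scattered in pieces too
small to satisfy larger requests.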
For many commercial Unix systems, uptime is commonly measured in years
(plural). For some high-end IBM systems, stories of machines running
non-stop for 15-20+ years are not unheard of, and even then those systems
lost power only because they were being decommissioned.
If you don't mind me asking, what does your medical application do?
> go boom in the night. Macs have had a much better rep for stability
> because Apple laid down the law as to what could be done in terms of
> hardware and software. Apple may have had a more stable system, but
> DOS/NT/Intel took over the world.
Keep in mind that early Macs did not have MMUs (Memory Management Units),
which are what protect one process from another. The MMU is also what keeps
application bugs from corrupting the OS itself.
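A tiny, contrived C example of what that protection buys you: on an
MMU-backed OS (NT, Unix), a store through a wild pointer like this gets
trapped and only the offending process dies; on an MMU-less early Mac, the
same store would silently scribble over whatever happened to live at that
address, possibly the OS itself:

    #include <stdio.h>

    int main(void)
    {
        /* An arbitrary address this process does not own. */
        int *wild = (int *)0xDEADBEEF;

        printf("writing through a wild pointer...\n");
        *wild = 42;   /* MMU: access violation / SIGSEGV, process killed.
                         No MMU: the value lands wherever that address
                         points, corrupting someone else's memory. */
        printf("not reached on a memory-protected OS\n");
        return 0;
    }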
Just rambling on I guess...
Cheers.