Except that for no good reason they also prevent you from moving or
renaming the file. No justification for that.
I expect you write the new file and delete the old one, then rename (or
some such sequence). The old file will not actually be deleted (as in, space
freed) until the last program using it stops doing so, i.e. until there are
no longer any open file handles on it. I'm not sure of the details, but I
think the delete also renames it to either a null file name or some flavour
of illegal one, so no new program can open the old one. This means that
20 progs could be using the old one for some time. If they are
restarted they get the new library.
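The sequence described above can be sketched in a few lines. This is a minimal illustration, not anything from the thread: the helper name is made up, and it uses Python's os.replace(), which on POSIX is an atomic rename() over the old name. Programs that already have the old file open keep reading the old inode; only new opens see the new file.

```python
import os
import tempfile

def replace_atomically(path, new_bytes):
    """Write new contents beside the target, then rename over it."""
    # The temp file must be in the same directory (same filesystem),
    # otherwise the final rename cannot be atomic.
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dir_name)
    try:
        os.write(fd, new_bytes)
        os.fsync(fd)  # make sure the new data is on disk first
    finally:
        os.close(fd)
    # Atomic on POSIX: the old inode loses its directory entry here,
    # but its blocks are freed only when the last open handle closes.
    os.replace(tmp, path)
```

A process holding the old file open continues to see the old contents after the swap, which is exactly the "20 progs could be using the old one" behaviour described above.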
"That excessive bail ought not to be required, nor excessive fines imposed,
nor cruel and unusual punishments inflicted" -- Bill of Rights 1689
On Mon, 03 Aug 2015 21:24:58 +0100, Tim Streater wrote:
No, there just isn't a file name at that point. Directory entry points to
the inode. The inode describes the file.
When the file is replaced, the directory entry points to the new inode.
The 'use count' in the old inode drops to zero (this is 'use' in the
sense of the number of directory entries pointing to it, not how many
users of the file there are).
When the *user count* of that inode drops to zero, then the file is
deleted and the inode is freed.
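The two counts can be seen directly on Linux. A hedged sketch, assuming a Linux /proc-style fstat(): after unlink() the directory-entry count (st_nlink) is zero, but the data stays readable through the still-open descriptor until it is closed.

```python
import os
import tempfile

def demo_unlink_semantics():
    """Show that unlink removes the name, not the open file."""
    fd, path = tempfile.mkstemp()
    os.write(fd, b"still here")
    os.unlink(path)  # directory entry gone; 'use count' now zero
    # No names point at the inode any more...
    assert os.fstat(fd).st_nlink == 0
    # ...but the data is still readable via the open descriptor.
    os.lseek(fd, 0, os.SEEK_SET)
    data = os.read(fd, 100)
    os.close(fd)  # only now does the kernel free the inode and blocks
    return data
```

This is the distinction made above: the file is only really deleted when both the directory-entry count and the user count have hit zero.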
Neither does TNP or he would say how.
One way would be to send a kill to restart the process but that doesn't
fit with never doing restarts.
Of course you could use the kernel thread locking to lock the kernel so
no user programs are actually using it, and then update the bits in the
kernel. That will work as long as no functional changes are made, just
bug fixes. It won't work if the code is part of the lock handling
though, so if you find a bug there you are stuffed.
On 31/07/2015 22:09, The Natural Philosopher wrote:
My colleagues tell me that Linux is much better than Windows because
when you get a kernel update you don't have to reboot the machine when
you install it.
They've failed to explain when it starts being used.
The point being made by the link I posted is simple. Windows cannot
overwrite a library file that is 'in use' by a running program.
It's not a question of the fact that that program won't get the update
until it shuts down and restarts; it's that the update cannot take place
while it is running.
It's one of those windows 'features' that didn't make any difference back
in the day when updates came on a set of floppy disks once every 2 years
and took an hour to apply.
It's the same with the awful disk systems and algorithms. Defragging
used to be something you did once every 5 years; now it is needed almost
annually or worse, simply because of the way the disk layouts work.
Unix was designed for multi-tasking and multiple users in a busy
environment, where one of the more important things was that you didn't
take a machine with hundreds of users on it down unless you had to.
The internet was mostly BUILT out of Unix computers. Networking is in
its blood. Linux, as 'son of Unix', took all the best features from it, and
reverse engineered them. The result is that while Bill Gates and Steve
Ballmer were busy adding chrome and tailfins to a dung cart, Linus Torvalds
and the professionals were busy firstly making sure they had a totally
reliable chassis, and then adding just enough of a dashboard and
controls to drive it.
Sure the 'user experience' lagged Windows - but the reliability of *nix
platforms and their basic speed and efficiency were never in doubt,
which is why, apart from a few Windows desktop and laptop users, a few
vanishing Symbian users and other legacy kit, and Cisco's IOS (and some
dedicated low level OSes used by real time hardware on very small chips),
everyone else is using a *nix OS whether they realise it or not. And
that includes all Macs post OS9 and all Android devices.
There never was a 'year of unix' or 'the linux breakthrough'.
What has happened instead is that the world has, wherever possible, not
used Microsoft, because it costs and it runs like diarrhoea, but instead
used a *nix derivative. As memory cost plummeted it simply became easier
to stock enough memory to run a more or less full *nix system even on
a tuppenny ha'penny ARM chipset, which nonetheless probably has more
processing power than an IBM mainframe of the 1970s...
My point is that *nix and Linux are the professionally engineered, highly
developed, reliable, ubiquitous operating systems in use on nearly all new
hardware.
The only exception to that is desktops and laptops, where there is a
huge installed base of utter crap that has to be allowed for: And
that's why the abortion called Windows 10 is on offer.
BUT - and it's a huge but - the desktop and even laptop market is
collapsing for domestic and consumer users. Fondleslabs, mainly running
Android (based on Linux as it happens), are taking over.
Only offices are really still buying PCs, where they need to keep running
specialised Windows apps.
But as each year passes the number of desktops goes down, and the number
of free open source programs that do everything paid-for programs used
to do - as well, or in many cases better - is increasing.
Windows' strength was always the GUI - it wasn't great, but it was better
than X Windows used to be... simply because the driving force of
Microsoft was in that area, not in the fundamental sound engineering
practice of 'what lies beneath'. OSX, built on BSD Unix with a pretty
decent GUI, of course now exists, along with a decent
development toolkit to allow 3rd party app developers to port to it.
Linux desktop design took a huge leap forward with the GNOME project, as
typified by Ubuntu, and finally - in my opinion - overtook Windows
in terms of sheer usability in the shape of MATE and Cinnamon, both
driven by a desire to recreate the main features of Windows XP and
OSX, but better.
And that's Linux Mint: optimised for easy transition from XP or windows
Vista, and unbelievably easy to install.
The reality is that Windows and Microsoft are now retreating into a
niche market: Legacy desktops. They lost the embedded market, the real
time market and the mobile market to *nix and derivatives thereof. They
lost the server market years ago - they were never really there - and
they never were in the mini/mainframe arena - that's all Linux these
days as well.
If Microsoft had any sense they would have bitten the bullet the way
Apple did, and produced a new version of Windows as a desktop app and
GUI running on Linux/Unix.
They failed. They could have ported MS Office to *nix. They failed there too.
The future will be *nix, simply because everybody who makes hardware or
writes apps wants a platform that doesn't owe anything to a monster of a
marketing company that dictates what you can do.
It's in everyone's interest but Microsoft's to contribute to, and spend
billions developing, a common operating system that is licence free, and
that's what they have done. The list of companies actively supporting
Linux development is vast.
"• Who is Writing Linux?
o Every Linux kernel is being developed by nearly 1,000 developers
working for more than 200 different corporations. This is the foundation
for the largest distributed software development project in the world.
o Since 2008, the number of individual developers has increased by 10
percent, reflecting the ubiquity of Linux across industries.
• Who is Sponsoring Linux?
o More than 70 percent of total contributions to the kernel come from
developers working at a range of companies including Red Hat, IBM,
Novell, Intel, Oracle, Fujitsu, among many others. These companies, and
many others, find that by improving the kernel they have a competitive
edge in their markets.
o Red Hat, Google, Novell, Intel and IBM top the list of companies that
employ developers who are reviewing and approving Linux development.
• How Fast is Linux Developed and Released?
o A net of 2.7 million lines of code have been added since April 2008.
o An average of 10,923 lines of code are added a day, representing a
rate of change larger than any other public software project of any size.
o An average of 5,547 lines are removed every day, ensuring that the
code is high quality and relevant for the most important implementations
of the kernel. "
Linux isn't a nerdy, geeky, amateur operating system. It's a professionally
written and supported, massive engineering project in progress. No one
owns it, but everybody contributes and everybody benefits.
The only reason it's not on the desktop more than it is, is because
Microsoft still charges enough, and allows manufacturers to charge, for
installing it on every PC you buy retail, by and large.
And those application developers who make money out of selling software
still want a platform of that model. But they are shrinking too -
the new model is either 'software as a service' - a cloud app that
doesn't need MS to access it - or 'free code, paid support',
which is the Red Hat model, and largely the IBM model.
It doesn't matter how much we argue about the merits of Windows versus
Linux. The facts are what the facts are, and I doubt that Microsoft will
exist in its current form in ten years' time, or indeed Windows.
It faces the same difficult choice that IBM faced in the '70s and '80s,
when it had to recognise that by and large its real business was
supporting large businesses in large application design and support: IBM
makes a lot less hardware than it used to, and it runs Linux more than
any other OS.
The trouble is that MS has run out of things to add to Office that make
sense. It's stuck with a creaky OS that is 20 years out of date and not
fit for the internet. It still can't decide whether it's in the
professional or the consumer market, and is in danger of losing both.
Apple decided where it stood: fashionable consumer hardware and high-end
workstations. Not operating systems.
Honestly if I were running Microsoft I'd probably leave.
New Socialism consists essentially in being seen to have your heart in
the right place whilst your head is in the clouds and your hand is in
someone else's pocket.
On 03/08/2015 09:49, The Natural Philosopher wrote:
Linux can't either; if it did, then the program could start executing
random code if it had to drag in a page from the library that you have
just overwritten.
At best Linux can add another library to the system, and the program may
start to use that library when it is restarted.
If you could do as you claim it would be one hell of a security hole.
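You can actually see which processes are still on the old copy after such an update. This is a Linux-only sketch of my own, not anything from the thread: it scans /proc/<pid>/maps, where the kernel marks mappings of files whose last directory entry is gone with "(deleted)". It assumes you have permission to read the maps files of the processes concerned.

```python
import os

def processes_using_deleted_libs():
    """Map pid -> set of '(deleted)' file mappings still in use."""
    hits = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/maps") as f:
                # maps format: addr perms offset dev inode pathname
                libs = {line.split(None, 5)[-1].strip()
                        for line in f
                        if line.rstrip().endswith("(deleted)")}
        except OSError:
            continue  # process exited, or maps not readable by us
        if libs:
            hits[int(pid)] = libs
    return hits
```

Any pid listed here is still running against the replaced library and will only pick up the new one when restarted, which is the point being made above.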
You never need to defrag NTFS, you know the one that was introduced
about 15 years ago.
Odd that we used to have to defrag the Plexus machines we had >15 years
ago. Of course it wasn't called defrag, it was called testing the
backups and you used to reload the backup which removed the
fragmentation from the drive. You used to test the backup whenever the
machines started to slow down.
When it consisted of three machines that was true.
Networking came much later than Unix.
We weren't, System X ran on a real-time OS not unix but I did add a unix
SVr5 subsystem onto all the exchanges to manage the billing and
communications with the backend offices.
Not yet, probably never.
Only since it became free, as Unix was far more expensive than Windows.
My point is that Unix was a professionally engineered and *expensive* OS.
Linux came along and destroyed the Unix market, just like you want it to
do to the Windows market.
Snip more windows bashing based on irrelevant personal views (not facts).
Unclear. I first saw unix running on a PDP-11/45 at DEC Western
Research Lab (Palo Alto) in 1977. Earlier than that, at CERN, we had
been building networks, based on our own hardware and software and
using coax cable. These were point to point links up to a few km and
running (for the shorter links) at up to 5Mbps.
Xerox produced XNS, which is what the Altos and Stars used over early
ethernet in the early 80s. AIUI, that might have become the wider
networking standard except that Xerox refused to release the specs for
some of the higher networking layers. Also by this time the unix boys
were busy creating IP, which then took over from XNS because it was
free and available with unix, and people had started writing IP stacks
for other machines, such as VAXes and some IBM systems.
But mail and file transfer had been going on using ad-hoc methods
anyway for some years.
"If you're not able to ask questions and deal with the answers without feeling
that someone has called your intelligence or competence into question, don't
ask."
It was, initially 3Mb/s, referred to as "research Ethernet" at Xerox. But
what protocol(s) ran over it was a different matter. At one time there
were a number of competing ones: X.25, XNS, IPX/SPX (which was related
to XNS), AppleTalk, DECnet. Mostly gone now & replaced by TCP/IP.
Today is Setting Orange, the 69th day of Confusion in the YOLD 3181
I don't have an attitude problem.
I know what X.25 is; I designed the hardware and wrote the firmware for
the X.25 card used on System X, before I designed the whole thing out in
favour of a Unix system and networking a few years later.
It wasn't X.25, and your lack of knowledge about the early days of
networking is only matched by your current lack.