Linux was first released to the world from here 17.9.1991 (funet.fi)
181 points by K33P4D on March 19, 2020 | hide | past | favorite | 106 comments


I was using Linux before things like Slackware came about, when it was just a boot and a root floppy disk. We had DECstation 3100/5000 machines costing small fortunes, that couldn’t reliably write a CD. The small 386 in the corner running “that newfangled thing” was far better at this :)

In my lifetime, I’ve gone through:

- building (as in: soldering chips to a motherboard) my own computer at home at age 11*

- buying an 8-bit Atari and learning about Antic and the 6502

- eventually getting a “disk drive” which stored an entire 128k

- moving on to a 32-bit CPU with the Atari ST

- blowing my student budget for the term on a hard disk, 20MB

- finally getting connected, at the blazing speed of 2048 baud

- lather, upgrade, rinse, repeat

- to where I have a 1gbit internet connection, a 10gbit home network, 100TB of storage locally, and a server rack in the garage with more than 100 “cores” available.

Things have changed so much, so quickly, relatively speaking.


[This is the ‘*’ from above. I couldn’t get ‘edit’ to accept it as an update, it just kept on putting the original text back, so...]

My parents bought me a black-and-white TV and a computer kit (they couldn’t afford both the TV and the already-assembled version) for Xmas at age 11. The big gift here was the TV, as far as I was concerned, we only had the one downstairs before that, and I got one in my bedroom! I even convinced myself I could watch snooker on a black & white TV, even if you did have to tune it by turning a dial until the picture appeared :)

About a month later, they were getting at me to put the computer together, which was the main present in their eyes - the TV was second-hand. Grumbling, I did so, and got it working. Once I’d told them, the whole family wanted to see this new technological marvel, so I took it downstairs, plugged it into the main TV, and everyone gathered around.

I typed in what the manual had told me to do, to test things out

  PRINT 2+2=4
To which it displayed “1”. And I turned round to the family expecting all the validation an 11 year old desired. My dad looked at me, looked at the screen, and just said “I knew it, you’ve buggered it” and walked out the room.

It took me a few days to convince him that “1” was the right answer. To this day, I think his mistrust in computers stems from that episode. He was a docker in a Northern city, and all he’d say for years afterwards was “you can’t trust those bloody things” in ... more colourful... language.
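The machine wasn't broken: in BASIC, `PRINT 2+2=4` parses as "print the result of the comparison 2+2=4", and in Sinclair-style BASICs a true comparison evaluates to 1 (some other BASICs use -1). The story doesn't say which machine the kit was, so "1 for true" is an assumption based on the output shown. A Python analogue of the same behaviour:

```python
# BASIC evaluated the comparison "2+2=4" and printed its truth value,
# not the sum. Python's True, coerced to an integer, behaves the same way.
result = int(2 + 2 == 4)
print(result)  # 1
```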


This was a beautiful little story! I had a big smile. Thanks for sharing.


I remember we had an older Compaq tower server that had a Pentium 60MHz chip AND a SCSI card. It was the perfect device to burn a CD with, and even it messed up on occasion. I can't imagine doing them on a 386. It wasn't until the Pentium II days that we could reliably burn a CD and still use the computer for normal tasks without hitting a buffer underrun.


Oh it was pretty much a dedicated machine when it was burning a CD. This was a postgrad office, there were 4 of us (three called Simon...) and we had a Unix workstation each.

Generally the PC sat in the corner and wasn’t really used. It had a SCSI card too, and when it was burning CDs it was left alone.

I remember one day a colleague of mine burnt 50 CDs so he could give them out after a presentation; it was pretty damn reliable. I also used it as a mastering machine for a CD that I had professionally duplicated to sell, full of Atari ST shareware/public domain s/w.

These were very early days of Linux. I actually had already released the “Mint distribution kit” which let my beloved Atari ST work like the Unix machine I had at college, and this was before any sort of distribution for Linux (at the time, Slackware had yet to be released) was available. The MDK was quite popular, mainly amongst students I think, but of course paled into insignificance compared to what Linux/Slackware/all-the-rest would evolve into :)


> there were 4 of us (three called Simon...)

I see your WWII-era parents got what I call the Alastair Memo, which stated that at least 50% of the boys born around 1955-1970 were required to be named Simon, Alastair, or Nigel.


Funnily enough, my brother is called Nigel... I think you might be onto something!


I’m shocked


That's nice, but it's hard to beat - a typewriter as a gift in one's teens, to flying in metal cages, to going to the moon, to VR headsets.

I remember reading opinion polls from people who saw this rapid rate of progress in the 60s, 70s and 80s, and they all assumed the 2000s would be this "flying cars everywhere" magical land.


What are you doing with 100 cores in your garage?


Generally FPGA place and route, and more recently the same for ASICs. I don't recommend making your own ASIC as a hobby. It's damn expensive.


You're doing your own ASIC? Very nice. I'd love to talk that over if you'd care to be in touch. (Contact details in profile).


I'm ridiculously busy right now, and I don't have the time - sorry. What I will do is point you to the resources I've been using:

http://opencircuitdesign.com is the primary resource, which gives you layout, design, proofing, and netlist generation.

I'm not aiming for anything even remotely state-of-the-art, I'm looking at a 180nm design and even that might be pushing the hobby funds. You can get "shuttle service" at various places to share a wafer, or you can use efabless (link on the magic page above) to do a lot of the work for you, at additional cost.

The guy who for years wrote and maintained Magic (the layout tool) now works for efabless, and he's a great guy - especially if you submit patches to him :)

It's a lot of hard work, you have to worry about all sorts of things you can take for granted in an FPGA (clock routing, i/o bonding and pad designs, oscillators for clock multiplication etc. etc. and yes etc. again). But there's not many people who can say they taught themselves how to make an ASIC :)


Thanks. Those are great pointers!

And yes, many kudos for teaching yourself how to make an ASIC, that is very cool :-)

I have a long way to go to get there.

I've been out of the open source silicon field for a while but want to get back in. My side project is building a small-scale factory for custom ASICs, rather than getting them made in another factory. It's a very interesting problem, and quite different from ASIC design issues since a lot of it is physics, and obviously there are many factors that are different on a small scale.

The goal is an open source silicon service to the extent of making it relatively affordable for others to iterate and reuse designs, in the hope of developing a thriving scene much like happened with open source software, GNU/Linux etc. But it is proving hard to find the time these days. And as you say, it's expensive, no matter how you go about it, even though affordability is a goal of the final service.

Thanks again, and good luck.


I have more than twice that in my garage for image/video/3D rendering.


2048, or 2400?


The Linux kernel has been the greatest example of a global network of programmers contributing in the open and it has shown that open-source software with the GPL works for the contributors and for companies relying on Linux in production environments. I see most of the FAANG companies (Except Apple obviously) have at least contributed in some way, which is interesting to see.

For server-side environments, or to some extent Android, I can see reasons for companies to contribute patches, but I'm not sure about the future of the several individual distros still floating around today, even though I rarely switch between macOS and Ubuntu these days...


I was under the impression that even Apple used linux for its production web environments.


Well 'using' Linux in production is quite different to actually contributing to the development of it.

From the FAANMG group of companies: Facebook, Amazon, Netflix, Microsoft and Google all have employees signing off patches under their company emails and no-one should be surprised to see no contributions from Apple to the Linux kernel anyway.


> to see no contributions from Apple to the Linux kernel anyway

There is one person submitting patches to the linux kernel under an @apple.com address and a few more listed as having reported them. That's as of the last time I pulled the kernel repo back in November.


Which, compared to any company of its size, is essentially negligible.


> no-one should be surprised to see no contributions from Apple to the Linux kernel anyway

Why not? I know Apple is generally seen as a closed-down company, but I'm still surprised that a company of their caliber, capital, resources and engineers can't find the time to help out the community that is helping them.


As someone who works for a company where the IP lawyers own your soul and constantly remind you of the fact, I would not be surprised that Apple contributions to the Linux kernel are few and far between.

Don't worry, the Apple thinktroopers will catch up with those people and then they'll probably be free to contribute to any free software they want in their now-limitless leisure time.


Because in Apple's case they are merely using Linux as off-the-shelf server software to serve a website or maybe files in their HQ. They have no need to modify Linux, therefore they have no need to create patches. Apple doesn't even contribute to FreeBSD despite lifting many components for OSX, AFAIK.

The others have deep investment in Linux and therefore contribute patches.


AFAIK Apple favors FreeBSD and NetBSD (the latter used for the Airport routers). I wouldn’t be surprised if they have contributed code to those projects.


Why would they care enough to make an exception to their already strict open-source policy to contribute back to GPL licensed software?


I believe Apple also uses Linux to bring up their chips.


Has it really shown that, or is it just survivorship bias? How many open source projects have failed despite having all those attributes?


I don't know why I feel compelled to mention it, but 1791991 is a prime number.


I was born 17.09.1990, exactly a year off. I grew up with Linux and am eternally grateful to the "scene" surrounding the magazine CDs with - I don't know - like 17 distros packed in together and endless tutorials on each of them.

It's also amusing how many people - my age or so - use Ubuntu as a daily driver these days that never went through the pain of configuring LILO or Broadcom drivers from source in Slackware ;)


I started using Linux in around 94/95. I heard some people say how cool Linux was. So after numerous attempts I got it installed and was booted to a command prompt and asked "What the fuck is so cool about this?"

It would turn out to be love at first sight. I've been using Linux ever since, and I've been working at SUSE for 10 years this Fall.


I remember going to a Linux meetup in downtown Seattle in about the spring of 1993. I was surprised at the large number who attended, probably a couple hundred.


That's pretty awesome.


> the pain of configuring LILO

LI


I once made a patched LILO that would print LOL instead of LIL when the second stage bootloader barfed.


Yeah - I recall that the number of characters printed would tell you where it failed.


Thank you, this just made my day. :)


Don't forget xf86config and other X11 fun to make it work with monitors, before people shared settings for popular models.


Yeah - And I bet you also remember those warnings about configuring your monitor wrong could physically destroy it. Fun times! :-)


How did you realize that?


I wanted to verify it too; if you Google '1791991 prime', the search page answers 'Yes'.


Yeah, I indeed Googled it [0]. Wolfram Alpha is also a quick way to verify, just query the number:

https://www.wolframalpha.com/input/?i=1791991

[0] I thought to check because I instantly knew the number was not divisible by 3 (dropping the 9s, which are 0 mod 3, the digit sum is 1+7+1+1 = 10, which is 1 mod 3), and at that point it's just quicker to look it up than run through the other primes up to sqrt(1791991).
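For a number this small, trial division up to sqrt(n) settles it immediately; a minimal sketch:

```python
import math

# Trial division: n > 1 is prime iff no d in [2, isqrt(n)] divides it.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(1791991))  # True, matching factor/WolframAlpha
```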


Just use the "factor" program from the command line:

  $ factor 1791991
  1791991: 1791991

factor prints the prime factors of each specified integer.


Indeed! At least, for systems with GNU Coreutils.


So is 9171991.


I remember my first Linux distro, Monkey Linux, downloaded from a BBS. It fit on 5 floppy disks, with XFree86. You had to use `arj` to extract it, an alternative to `pkunzip` at the time. (http://www.ibiblio.org/pub/historic-linux/distributions/monk...).

SuSE invented LiveCD, AFAIR, years before Knoppix claims to be the first LiveCD Linux in the late 90s.


> SuSE invented LiveCD, AFAIR, years before Knoppix claims to be the first LiveCD Linux in the late 90s.

Yggdrasil Linux was earlier:

https://en.wikipedia.org/wiki/Yggdrasil_Linux/GNU/X


Yggdrasil was amazing. It single-handedly helped me sell Linux to many a grizzled greybeard who couldn't believe that this toy kernel would amount to anything... all I had to do was boot the CD and show them a working X terminal a few minutes later, and that was all it took: I spent days copying those CDs for the entire team.


Mandrake, Slackware, then Gentoo here.

Curious, do you remember when user contributable/rolling package managers became popular? Back in 2000 when I wanted latest software, I remember having to resolve dependencies manually (view compile errors, then yahoo/google for libraries and errors). Each dependency had to be compiled manually, sometimes requiring patching code to get things to play nice. This was always a headache lol, but felt awesome once things actually compiled.


Debian was the first distro to allow volunteers to maintain packages. Then Ubuntu released Hoary and invited people to become MOTU maintainers, with a hangout channel on FreeNode. Later, Mercurial was released and Launchpad.net was created during the Dapper days, back when Git was not yet mainstream, which gave way to PPA packages. Then, after a year, Arch Linux gained popularity, allowing anyone to submit packages using `yaourt`, a custom package manager on top of `pacman`.


We used Linux in a product before it had a network stack. We did need networking, and used KA9Q to do it. We also used rz / sz (zmodem protocol) for file transfer over phone lines. I recently integrated zmodem into an embedded system for firmware updates over a serial port- it's still very useful.

The performance of the floppy drive was terrible at first - a friend of mine and I added buffering, so that it could read a track at a time instead of a block at a time (which caused it to read only one block per disk rotation).
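Back-of-the-envelope numbers show why track buffering mattered. The comment doesn't give the real figures, so the geometry below (18 sectors per track, 512-byte sectors, 300 RPM) is an assumed, typical 3.5" floppy layout:

```python
# Hypothetical floppy geometry - assumed, not from the original comment.
sectors_per_track = 18
bytes_per_sector = 512
rotations_per_sec = 300 / 60  # 300 RPM

# One block per rotation: after each read the next sector has already
# passed under the head, so the drive waits a full revolution per block.
block_at_a_time = bytes_per_sector * rotations_per_sec

# One track per rotation: every sector on the track is captured per pass.
track_at_a_time = sectors_per_track * bytes_per_sector * rotations_per_sec

print(track_at_a_time / block_at_a_time)  # 18.0: one factor per sector
```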

For the same project my friend created the generic SCSI driver that still exists today. It allowed us to connect a medical film scanner to Linux.


A lot of quality photos of old maps in one of the links: http://ftp.funet.fi/pub/sci/geo/carto/maakirjakartat/

For example a book of old maps dated to 1643.


I thought of a fun exercise: Take the original Linux kernel, study it and slowly build upon it without looking at the current Linux kernel.

My thinking is that it could be quite interesting to see how different people evolve the kernel - maybe some interesting ideas result.


I've always wondered what Torvalds's answer would be if you asked him if he had the chance to snap his fingers and have the kernel rewritten from scratch what he would change?


Whatever it is, I hope the answer involves “implement kevent(2) instead of creating epoll(2)”.


Add an 'e' to the creat() call?


Not much scope for that, it's POSIX (and Ken Thompson's "fault" https://stackoverflow.com/questions/8390979/)


That sounds fun, but a very bad use of time.

Linus' monolithic kernel won out over Tanenbaum's microkernel, because it just worked. In the 1990s that was important.

Now we want it to work and not get totally pwned because we opened a sketchy email attachment before we had our coffee. Tanenbaum was right [1], but for the wrong reasons, and way too early.

[1] https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_deb...


Where would one find the original kernel?



This tarball seems to include the original kernel sources: http://ftp.funet.fi/pub/linux/kernel/Historic/linux-0.01.tar...

The release notes for 0.01 are here: http://ftp.funet.fi/pub/linux/kernel/Historic/old-versions/R...


This repo claims to have the original: https://github.com/zavg/linux-0.01


Seems like a waste of time.

Like if you want to have a look at a different kernel designed recently:

https://fuchsia.dev/fuchsia-src/concepts


I wouldn't consider it a waste, the changes and history behind those changes to the design seem interesting to me.


I have fond memories of funet.fi, and it wasn't because of the Linux kernels being released there ;-)


What was it then? :)


ASCII graphics of girls who would probably be pretty chilly dressed like that in the Finnish winter.


Ah, the memories of ordering distro bundles from cheapbytes and gleefully experimenting in my “test lab” (aka bedroom). It’s hard to quantify how liberating the early Linux days felt after a slow and painful indoctrination to computing in the Windows world.


When I was 15-16, me and a friend ordered CD-ROMs from CheapBytes and sold them in The Netherlands (with a label applied to cover the CheapBytes branding). Most people didn't have credit cards in the 90s, so this scheme worked well enough to earn us some additional 'pocket money'. The website is forever preserved by Tripod (at some point we stopped, but never removed the website):

http://linuxlop.tripod.com

We did get some complaints from people who were convinced that selling Linux distributions was illegal ;).


> We did get some complaints from people who were convinced that selling Linux distributions was illegal

I was a C.S. student at UNC-Wilmington when RedHat first launched, and I remember a lot of people were freaked out about that. They were just howling "they're SELLING Linux?!???!" Heh.


Love this.


One of the most interesting folders in the Funet archive is the mirror of the Simtel[0] FTP server[1]: there is a lot of very interesting software there for various old platforms.

Especially interesting: there are some very cool CAD, GIS and graphics apps for MS-DOS and Windows (from 3.1 to XP).[2]

[0] https://en.wikipedia.org/wiki/Simtel

[1] ftp://ftp.funet.fi/pub/Archived/simtel.net/pub/simtelnet/

[2] https://twitter.com/app4soft/status/1240002398577397772



NO! You linked the wrong directory!

Here is the correct HTTPS link to the Simtelnet mirror.[0]

Also just discovered that Simtelnet mirror on Sunet is more complete.[1]

[0] https://www.nic.funet.fi/pub/mirrors/Archived/simtel.net/pub...

[1] https://ftp.sunet.se/mirror/archive/ftp.sunet.se/pub/simteln...


Ah yeah, the good old days. Yggdrasil Linux, Turbo Linux, Slackware, etc. I think I installed my first Linux system in 1996 (maybe 1997) and never looked back. I didn't drop all use of OS/2 and Windows immediately, but by 2001 I had adopted Linux as my fulltime desktop OS and to this day it's all I use, outside of situations ($DAYJOB mostly) where I'm required to use something else.

I wonder if Linus had any idea of the impact his "toy project" would have on the world?


I remember buying "Linux - unleashing the workstation in your PC" circa '94 which had the tag line "friends don't let friends use DOS !". I was quite stunned at how much better it was than DOS/Windows at the time. I've been a Linux user ever since!


The disk distributed with that book was Linux Universe. For me, it was the first easy-to-use Linux distribution.


These specs, and it runs over HTTP... and the page is likely hand-written in Vim, in unformatted HTML.

> It runs on a Linux server with dual 20 core processors, 786GB of memory and 80+TB of NetApp NFS storage.

> It has a 2 x 25Gbit/s connection to the Funet backbone.


Got my first copy of Linux (Debian Potato!) in early 2002. I'd known about it for years thanks to second-hand computer magazines and the excellent (in retrospect) ZDTV, and it definitely lived up to the hype.


ZDTV was amazing. First I learned about Linux was from "The ScreenSavers". Loved that show.

I think my first functioning Linux install was Debian Sarge in maybe 2005. I remember the first time I got it to boot and it was just the command line that I had working, but it felt like freaking magic.

Those were the days too of Compiz and all that fun. It was mind blowing to realize that there was an alternative to Windows that not only looked fancier but had free access to things like compilers and interpreters. Definitely started me down the path to my current programming career.


This is cool. I remember my first distro was Mint like most. My story if curious: https://craignuzzo.tech/my-linux-life-journey/


I'm wondering why they need "dual 20 core processors, 786GB of memory" to serve FTP?


FTP is not the only service provided, see http://ftp.funet.fi/README


Also see [1]

Very old (inter)network; it also ran an IRC server back in the day. IRCnet, IIRC

[1] https://en.wikipedia.org/wiki/FUNET


This does not answer my question, I've read the site and the README: what do you see there that I didn't see?


My guess is that they're mostly optimizing to minimize impact on other operations and effort needed to host this - 80 TB storage from shared system might be nothing compared to needs of scientific computing, but letting requests directly there might be too much - having lots of cache at frontends will take care of that, and dual processors are needed to have that much memory.

And dual frontends will give nice redundancy.


Thanks, this is pretty much the only real answer to my serious question here


They can serve a lot of users simultaneously. Most of that RAM will be file cache.


Yup, a niche science site may have too big a working set for a single server to handle under a worldwide random workload, but it is still cost-effective versus trying to pay for a CDN that will have such low user density.

The conventional science community approach is volunteer mirror sites. We could have many benefits of a CDN without the big recurring costs.


You may as well ask why big systems are needed for anything. You can run Postgres on a 10 year old laptop, but you can't run Postgres on a 10 year old laptop as a backend for the USPS.


An SQL database workload I would understand, but an FTP seems mostly filesystem/io not memory and cpu bound. If you have more information, I'd be happy to have the details.


I didn't know USPS had such high requirements. Where can I learn more about this?


Operated by csc.fi, so it's perhaps just the standard server for a science oriented org. Or a maybe a "hand me down" from them.


So I actually live (and am typing this) about 5 minutes walk from the CSC head office in Keilaranta, which I can see out of my window. I wonder if the machine is physically located there.

(I'm pretty sure my first linux kernel download was 1997, one of the 2.0-pre series, and it probably came from funet.fi)


It has less to do with capacity required for the task than it has to do with justifying the IT manager's budget and general dickwavery. Although, if IT really needed to justify its budget, today it'd be talking about how many AWS nodes they use and how big their Kubernetes cluster is. To serve FTP.


What does 17.9.1991 mean? Can I suggest people stop coming up with novel date formatting schemes and use a standard that universally makes sense? ISO 8601 is worth looking at:

https://xkcd.com/1179/


It's hardly novel. This ordering goes back centuries, and is used by the majority of the world.

The ISO standard is great for data exchange, but it's unlikely to change how dates are normally spoken and casually written.
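A quick illustration of the data-exchange point, using Python's datetime on the date from the title (here 17 > 12 happens to make the ordering unambiguous):

```python
from datetime import datetime

# "17.9.1991" is day.month.year; strptime accepts the single-digit month.
released = datetime.strptime("17.9.1991", "%d.%m.%Y")

# ISO 8601 (YYYY-MM-DD) is unambiguous and sorts lexicographically.
print(released.date().isoformat())  # 1991-09-17
```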


You're technically correct. However, to a large number of people it not only appears novel, it is ambiguous for ~40% of dates of the year with their native format (MM/DD/YYYY). Even if it's convenient for the majority, it may cause so much friction for the minority that it's not worth using anymore.

My native format is mdy but I dislike it almost as much as dmy.


edit is not working. I meant my native format is mdy.


Day month year is a common date format in much of the world. I think they should write the year as 9191 for consistency but I’m not willing to die on that hill.


I agree, but 17 being greater than 12 and the year being 4 digits reveals the order in this case. This only works sometimes, though. Like how 14 o'clock is pretty clear, but because not everyone uses 24 hour time yet, 2 o'clock is actually ambiguous. I find a leading 0 helps, but in spoken conversation it's not so simple.


Where is fourteen-o-clock common? I've never heard 24 hour time spoken that way.


Not in English, but the direct translation is used in other European languages.

Danish: klokken 14

German: 14 Uhr

That does mean native speakers of languages like this might say "14 o'clock" when speaking English.

I very occasionally use this if I need to be certain to be understood by someone who rarely speaks English, "we'll arrive at 19 o'clock". (A friend's parents in rural rest-of-Europe, for example.)


This is the most commonly used format in the world, and funnily enough it is not listed as discouraged in the xkcd :-)


It's the third format from the left on the top row.


No it's not. The one in the title has dots as separators and uses single digit for first 9 months.



