Back in the late 90s, when I first entered the video game industry (when it was quite scruffy, countercultural, and populated by some pretty odd people), one of the first things I encountered was a new co-worker who, next to his giant tower of used Mountain Dew cans, had a black and white TV in his cubicle. This struck me as very odd at the time - as I understood things, the point of work was supposed to be that it was a place where you worked, not a place where you watched TV. (Granted, everyone else was playing the recently released Diablo on their work PCs over the network during lunch, and we were a game studio after all, so my reaction wasn't totally coherent.) Still, no one else had a TV, and that guy was young, single, a recent transplant with no work-life balance - it still seemed unusual at the time.
Fast forward 28 years, and now everyone has an amazing TV in their pocket at all times - when they commute, sit in their work space, go out for coffee or lunch, or sit down in the bathroom - with a near-infinite collection of video via YouTube, Netflix, and even massive amounts of porn. How little did I know. And that's to say nothing of texting and Twitter and Reddit and instant messaging and Discord and ...
Several years ago, I was working on a college campus, and there were giant corporate-flavored murals along some of the city blocks students walked, full of happy multicultural clip-art people and exciting technological innovation, adorned with the message, "Imagine a borderless world!" Clearly that message was meant to be rhetorical, not a call to reflection, critique, or reevaluation. There was no suggestion that one might imagine the borderless world and then, having done so, decide it was a problem to be corrected.
I wonder a lot, these days, whether we're deep into a Chesterton's Fence situation, where we have to rediscover the hard way the older wisdom about having separate spheres with separate hard constraints and boundaries on behaviors, communities, and communication pathways - boundaries that facilitate all sorts of important activities that simply don't happen otherwise. Borders and boundaries as a crucial social technology, specifically one for directing attention productively. Phones and tablets are, in their own Turing-complete way, portals to a borderless world that pierces the older, intentional classroom boundaries.
In my first job out of university in the 80s, I stayed up all night playing Knight Lore on the Spectrum with friends and failed to get up the next morning. My boss drove across Leeds to bang on my door and see if I was alright. I needed that job, so I stopped playing computer games.
In the 90s a later boss called me out for spending my days attached to the Slashdot firehose. I had sort-of known that it was a wasteful time sink, so I resolved to completely stop using the social media of its time, and have avoided most incarnations of it ever since (but here I am).
As a scouter working with teenagers, I feel that most kids with a supportive background will tame this beast for themselves eventually, so I hate to make hard "no phones" rules. I would rather they come to terms with this addiction for themselves. I know that some simply won't finish school without strong guidance, but delaying exposure to this might just be worse in the long term.
> As a scouter working with teenagers, I feel that most kids with a supportive background will tame this beast for themselves eventually, so I hate to make hard "no phones" rules.
In my experience with mentoring juniors and college students, it’s common to have some wake-up call moment(s) where they realize their phone use is something that needs to be moderated. For some it comes from getting bad grades in a class (college in the age range I worked with) and realizing they could have avoided it by paying attention in lectures instead of using their phone. I’ve also seen it happen in relationships where they realize one day that their social life has disappeared or, in extreme cases, get dumped for being too into their phone. For others it shows up in their first job when someone doesn’t hold back in chewing them out for excessive or inappropriate phone use.
In the context of high school students, I don’t see this happening as much. A big component of high school social structure is forcing students a little bit out of their comfort zone so they can discover friends and build relationships. The default for many is to hide, withdraw, and avoid anything slightly uncomfortable. For a lot of them, slightly uncomfortable might be as simple as having to make casual conversation with people around them. A phone is the perfect tool to withdraw and appear busy, which feels like a free license to exist in a space alone without looking awkward.
So while I agree that most people come to terms with the problem themselves as adults, I also think that middle and high schools deserve some extra boundaries to get the ball rolling on learning how to exist without a phone. The students I've worked with who came from high schools that banned phones (private, usually, at least in the past) are so much better equipped to socialize and moderate their phone use. Before anyone claims socioeconomic factors: private high schools generally have sliding-scale tuition, and a large percentage of students attend for free due to their parents' income, so it's not just wealthy kids from wealthy families that I'm talking about.
> I feel that most kids with a supportive background will tame this beast for themselves eventually, so I hate to make hard "no phones" rules. I would rather they come to terms with this addiction for themselves
That approach doesn't work so well for people with drug and alcohol addictions/dependencies.
> That approach doesn't work so well for people with drug and alcohol addictions/dependencies.
Children raised in cultures where alcohol is soft- rather than hard-banned for young people, and gradually introduced to it with parents around (think European teenagers having a glass of wine with lunch), tend to have healthier relationships with alcohol in later life than those raised in hard-ban-until-18/21 cultures. I think exactly the same will prove true of phones.
There may be a massive confounding factor in the type of alcohol consumed.
The more permissive cultures tend to be beer- or wine-centric. I have never been deeply interested in addictology, but the few (older) works on alcoholism I have read mentioned that beer and wine drinkers tend to develop a different sort of relationship with alcohol than consumers of hard liquor, in the sense that they have a hard time abstaining entirely, but fewer of them develop into the full-blown "gin zombie" type.
I disagree. You see fewer depictions of beer or wine addicts despite them (at least in my experience) making up the majority of high-functioning alcoholics. I don't know for sure why they're depicted less, but my running theory is a combination of not being tragic enough for drama focused on alcoholism and being played for jokes with things like the "wine mom" stereotype. They also tend to be a lot better at hiding their alcoholism because their type of drinking is more accepted. They have a different relationship with alcohol, but not necessarily a better one (arguably a more dangerous one, given the relative societal acceptance of their type of alcoholism).
That's the crux of the situation, though: on hard liquor, the slippery slope to becoming a non-functional alcoholic is much steeper.
There also might be a gender difference. In my experience, men who drink wine mostly drink with friends and self-limit. The sort of men who are prone to alcoholism won't be satisfied by mere wine and will move on to hard drinks quickly. On the other hand, women often drink wine alone and might develop a daily habit that degrades into full-blown alcoholism even without resorting to hard drinks.
FYI, I barely drink at all and I dislike sloshed people (including myself on the rare occasions I get intoxicated; it is an unpleasant state to be in). But even hell has layers.
I might also be biased. My dad was a "high-functioning alcoholic" who primarily drank beer. I also suppose my definition of high-functioning might be a bit different, as I think it's just as dangerous as non-functional alcoholism because it's easy to hide. My dad hid his problem well; it was only when he almost killed himself by driving off a cliff into a lake while he was shitfaced drunk that he decided to sober up. If he hadn't been as good at hiding it, he might have been pressured into stopping before he did as much damage to himself and those around him.
In my midwest area it seems like you can tell who the alcoholics are right away, because they buy and drink cheap beers 90% of the time. Maybe it makes them feel less like an alcoholic because they aren't drinking hard liquor. And it seems someone is more likely to say something if they see a person down half a bottle or more of vodka by themselves, but nobody ever says anything about seeing someone down 10+ beers.
These things are not comparable. Alcohol is so old a thing we not only built plenty of stable cultural norms around it but we even developed genetic adaptations.
And speaking of culture, as an Eastern European I would argue our rules regarding alcohol are not soft. Yes, we drink, and we're even expected to drink on some ritualized occasions. But contrary to Hollywood depictions, it's not cool to be a non-functional alcoholic in our lands. When society decides you can't manage yourself, it builds a harsh zone of exclusion around you. Imagine you have an uncle Jim who is constantly doomscrolling, and for that he has no chance with a good, reliable woman, his job opportunities are limited to something non-prestigious, people talk about him like he's a dimwit, and even kids look down on him. He's recognized as a failure of a man, and parents don't miss a chance to remind their kids of the bad example. That would be "not-hard" rules, EE style.
They weren't always. In fact, it took many centuries for this to happen. The history of cocaine in the US is quite interesting. It was being used everywhere and by everybody. Factory owners were giving it to their laborers to increase productivity, and it was used in endless tonics, medicines, and drinks (most famously now Coca-Cola = cocaine + kola nut), and so on. You had everybody from Thomas Edison to popes to Ulysses S. Grant and endless others testifying to the benefits of Vin Mariani [1], a wine loaded with cocaine that served as the inspiration for Coca-Cola.
So probably part of the reason it was so difficult to realize there was a problem is that everybody was coked out of their minds, so it all seemed normal. And I think the exact same is true of phones today. Watch a session of Congress or anything and half the guys there are playing on their phones; more than a few have been caught watching porn during session, to say nothing of the endless number that haven't been caught! I can't help but find it hilarious, but objectively it's extremely inappropriate behavior, probably driven by addiction and impaired impulse control, which phones (and other digital tech) are certainly contributing to heavily.
I find it difficult to imagine a world in the future in which phones and similar tech aren't treated somewhat similarly to controlled substances. You can already see the makings of that happening today with ever more regions moving to age restrict social media.
> The history of cocaine in the US is quite interesting. It was being used everywhere and by everybody.
Be careful with that comparison. The cocaine infused drinks of the past are not comparable to modern cocaine use for several reasons.
The route of administration and dose matter a lot. Oral bioavailability is low and peak concentrations are much lower when drinking it in a liquid as opposed to someone insufflating (snorting) 50mg or more of powder.
You could give a modern cocaine user a glass of Vin Mariani and they probably would not believe you that it had any cocaine in it. The amount, absorption, and onset are so extremely different.
> So probably part of the reason it was so difficult to realize there is a problem is because everybody was coked out of their minds
That’s an exaggeration. To be “coked out” in the modern sense they’d have to be consuming an insane amount of alcohol as well. We’re talking bottle after bottle of the wine.
Be careful with these old anecdotes. Yes, it was weird and there were stimulant effects, but it's not comparable to modern ideas of drug abuse. It's like comparing someone taking the lowest dose of Adderall by mouth to someone who crushes up a dozen pills and snorts them. Entirely different outcomes.
Vin Mariani was 7+ mg of cocaine per ounce, with a relatively low alcohol content that would have been further mitigated by the stimulant effect of the cocaine in any case. And of course other concoctions (including Coca-Cola) had no alcohol at all - Vin Mariani is just a fun example because of the endless famous names attached to it.
Obviously you're right that the absorption is going to be different, and a modern cokehead with a high tolerance likely wouldn't even notice it had anything in it. But give it to a normal person and they're indeed going to be coked out - in very much the same way that small doses of Adderall can have a very significant effect on non-users. The obvious example there being college kids buying pills around finals.
> Following your argument, shouldn't anything that can induce addiction be controlled?
Depending on the risk profile -- totally. There are talks of taxing sugary drinks and of not selling "energy drinks", which are caffeine + sugar, to kids for this very reason.
I also mean controlled in the broad sense, not as in "controlled substance". The culture of consumption prescribed by society is a way of regulation too, more effective even than laws.
> it is claimed addicts often seek treatment after hitting "rock bottom".
From my experience it is often too late at that point. Actually hitting rock bottom is difficult and destructive, and it leaves scars. As they say, prevention is better than cure.
They aren't physically addictive like alcohol or opiates, but it's very clear that many people have a psychological dependency on them. Whether a psychological dependency counts as an addiction is up for debate (personally, I believe it does, due to my experiences with self-harm, which many people including myself were or are psychologically dependent on), but the differences are mostly semantic if they end up functionally the same.
There are a lot of reasons for distraction while driving, but we don’t call all of them addiction on that premise. If a driver was not looking at his phone - maybe he’d be looking at something else. The phone is not the reason - it’s just a very suitable object.
This is thoroughly debunked with hard data from distracted-driving laws that focus on phone use while driving. We have the luxury of both before and after data, across different jurisdictions.
I agree, but something being a distraction does not automatically mean it's addictive. Gear shifting is a distraction, but we don't consider it addictive.
[...] the majority of research in the field declares that smartphones are addictive
Though that section goes on to disagree with that majority, the fact that "the majority" declares smartphones addictive certainly supports the idea that they are.
The main challenge is that the prefrontal cortex, which is responsible for impulse control and other things, only fully develops around age 25.
The problem with that is that without some explicit instruction, guidance, or intervention before they have full control of their impulses, not everyone tames the beast unscathed.
> The main challenge is that the prefrontal cortex, which is responsible for impulse control and other things, only fully develops around age 25.
This factoid has been repeated for decades but it’s essentially a myth.
Brain development continues into your 20s, but there isn’t a threshold at age 25 where someone goes from having poor impulse control to being capable of good impulse control.
18-25 year olds are not children and are fully capable of having impulse control. That can continue to develop as they age, but it doesn’t mean age 25 is when it happens.
I would agree that actual children need some more explicit boundaries, which is also why we don’t allow children to do a lot of things that people over 18 can do.
I don't think anyone is saying impulse control goes from 0% to 100% on everyone's 25th birthday, like flicking a switch. But is it not reasonable to say that a 25-year-old will have significantly better impulse control than they had when they were 18? (And that their 30-year-old self probably has a similar level of impulse control as when they were 25?)
I owned a string of fast food restaurants, and I avoided hiring anyone under age 20 when I could. When I did, the requirement was that they be in college, but in every case I found that these kids, who returned for summer work every year, did a lot of growing up between the ages of 18 and 20.
I think this persists because in most studies, 25 - and the other specific ages for further stages - comes from plotting a distribution of the traits being analyzed and pulling averages from it. We like precision and fixate on the numbers, but they really mean "the majority of the observed change under study appears to occur within a specific range", which does not make for catchy public headlines.
As someone who graduated from high school in 2025, I completely agree with this. I am glad I had to work it out on my own, and I don't think this is an area where a school should take control. If I'd had to figure this out along with the stress of college, I don't know if I could have handled it. I also think it has helped my overall time management and how I prioritize my time.
I know not everyone will have the same experience as me, but I just feel like learning to manage it on my own was overall beneficial for me in the end.
I think the problem is that most students (as this study shows) are not figuring it out on their own, at least not in high school. It feels like you're one of the outliers, not the common case.
Having rules about what you can and can't bring into school is nothing new. I went to high school in the 90s, and there were plenty of things we weren't allowed to bring with us into class; back then, the closest analogue to smartphones would have been pagers, probably.
It seems entirely reasonable to ban smartphones (and dumb phones, even) from schools. Frankly, I think it's absolutely insane that they were ever allowed.
And sure, maybe these students who go to high schools where smartphones are banned will get to university and go nuts, sitting in lecture halls with their phones out all the time. They'll learn very quickly that their grades will suffer, and will clean up their acts or fail out of school. But this is like everything else: the first year of university is the big year of independence, of being away from parents for the first time, and college students do plenty of dumb things in the name of that independence. That's always been the case; I'm no stranger to that phenomenon myself. They either work it out on their own, or they fail out.
Similar situation as you: I switched to a flip phone and now use my old iPhone as a glorified YouTube machine when I'm too lazy to go to my desk or don't feel like dealing with my tablet's poor wifi range.
> As a scouter working with teenagers, I feel that most kids with a supportive background will tame this beast for themselves eventually…
Fellow Scouter here. Lots of Scout units in the USA have cell phone bans. That’s such an obsolete policy. We need to help the Scouts model good choices, and that doesn’t happen when decision opportunities are removed.
Also, if they are buried in their phones, take that as feedback on how much fun they are[n’t] having in your Scout unit.
Of course, but they can't be on adventures 24/7, that's the point.
Kids aren't supposed to have fun 24/7. It's impossible.
The problem is what happens the moment the "fun" stops. If everybody reaches for their phones, then that's an issue that cannot be fixed by your "just have more fun" mindset.
Part of the lesson is understanding how we got here.
The answer is, of course, liberal hyperindividualism. By that I don't mean "liberal institutions" or respect for the individual person especially in the face of collectivism, but an ideology of antisocial atomization of the self that thrusts the self into subjective godhood. Paradoxically, this makes people more susceptible to control in practice.
Now, ideological and political programs don't fully realize the consequences of their premises instantly. It can take years, decades, centuries for all the nasty errors to manifest and become so conspicuous that they cannot be ignored. The Enlightenment program in our case. And so, in this hyperindividualism, the social order - its layers, its concentric circles, its various rights and demands on the individual that precede the consent of the individual - is all reduced progressively to not only the consensual, but also the transactional. Social bonds and structures evaporate or become fluid and contingent merely on the transactional; commitment and duty are a prison. Consent as the highest and only moral law leads us to relativism, because if all that is needed is consent to make an act moral and good, then naturally what is morally good will vary from person to person, and even minute to minute for a given person. On top of that, consent can be attained through manipulation and power, and so now individuals joust for power to manufacture consent in order to bless their exploitation of others.
The self cannot be limited in any way according to this program, and any residual limits are the lingering chains of some ancient past.
Perhaps most amusing is how so-called "countercultural" movements are anything but. These are typically just advancing the ideological program, not rejecting it. Contradictions between such movements and the status quo often come in the form of a tension between residual cultural features of an earlier age and the greater faithfulness to the trajectory of the program among the countercultural. Typically, conflicts are over power, not belief. And sometimes, the internal contradictions of the program lead to diverging programs that come into conflict.
Leaving out the word "liberal," as I don't really understand its context here: individualism was at one time a boon for the nation and the economy. People move out of their family homes early, start their own families, chart their own paths. Good for capitalism. And good for lots of things, really; a lot of America's success can be traced back to it.
But man, social media and the internet age have really exploited it to an unhealthy and unproductive point.
I remember going to college for the first time in 2000 and having an absolute blast meeting the people I was, by circumstance, forced to be around. I went back in 2004 and it was completely different: everyone was on their phone, maintaining their personal bubble in what should have been an age of exploration. That made me rather sad.
Today it's even worse, but at the risk of being an old man yelling at clouds, I won't drone on. I mostly wish my own children could experience the upbringing I had, as I find this one rather dystopian and depressing.
> Leaving out the word liberal as I don't really understand its context here
I mean "liberal" in the philosophical sense, not the ill-defined, often pejorative partisan sense (in a philosophical sense, both major US political parties are liberal parties; we live in a liberal political order). One can support liberal institutions while rejecting the ideology along with its false anthropology, presupposed metaphysics and thus ethics.
The basic failure of liberalism lies in its definition of "freedom" which boils down to the ability to do whatever you choose, an absence of any restraint or constraint. Compare this with the classical definition of freedom as the ability to do what is objectively good. True freedom only exists in being able to exercise your nature as a human being. That's what flourishing means. The heart of such freedom is virtue and thus morality. The ability to do drugs or watch porn or sleep around or whatever is contrary to the good of the person doing those things. They do not make a person free. Immoral acts imprison and cripple the person committing them in the very act of committing them.
> individualism was at one time a boon for the nation/economy
I'm not talking about economic freedom. Economic freedom is always subject to various constraints. Some (good) regulation is necessary to protect the common good on which we all depend.
I'm talking about an anthropology that conceives of human beings in a way that denies or misrepresents their social nature and denies their obligations and duties toward others, and misunderstands freedom.
> People move out of their family homes early, start their own family, chart their own path.
I'm also not talking about having the liberty to make all sorts of life choices. What would the alternative be? And people today aren't moving out of the house. They're living with mom and dad into their 30s, maybe longer. Yet liberalism marches on.
And that's perhaps part of the lesson. If we draw out the conclusions of liberal premises and cross them with human nature and the human condition, we find that liberalism's inner contradictions cause it to implode on itself, producing what might appear to be paradoxical results. After all, shouldn't liberalism have given us a freer, better world? This is the part where its defenders will blame external factors, which raises all sorts of new questions about how that is possible.
The older wisdom was that you worked on the farm with your husband and children for your entire life, breastfeeding while you peeled the potatoes, putting down your spindle to comfort a crying child. Millers lived in the mill; even blacksmiths lived at their smithies. Except for rituals, separate spheres with separate hard constraints was a novelty of the Satanic mills where the Victorian proletariat toiled.
They still had clear boundaries. They slept in the sleeping place and at the sleeping time, they worked at the working place and at the working time. See, they didn't have smartphones to fiddle with in bed.
> they didn't have smartphones to fiddle with in bed
This is solvable for people who want to solve it. We have a dedicated charging station in our house for all electronic devices. Before bed, all of those devices get put there - including my phone and my wife's.
This definitely is the way to do it. I have started keeping my phone in my living room at night instead of my bedroom, but I'm still bad about doing it every night. Phones are addictive, and it is mentally hard to break out of the addiction. It is essentially a "you just have to do it" situation, but "just do it", while technically simple, is still difficult if you're addicted.
I've made my peace with the tiny, tiny chance that I might miss my father's last moments because I didn't hear about his heart attack til the morning, for example.
Living as if it might happen any time and I must be available for it is not healthy IMO.
I figure it's a legitimate concern. One of my older brothers keeps his phone outside his room, not close enough to hear it ring. About 10 years ago my little brother died unexpectedly in the middle of the night, and all I could do was leave my older brother a message about it. He was beating himself up about it the next morning when he got the voicemail. Not that it would have changed anything - by the time I was notified it had already happened, so there were no final moments to miss. But I suspect my brother doesn't keep his phone so far from his bed now.
I put my phone in a drawer. Everything's in silent mode. I have a fully disconnected, distraction-free iPad for reading and writing. Work only happens on the computer. There are no emails on the phone.
Yet, I can't fully disconnect. Every device, every account, every app mixes work stuff and personal life stuff. And software is so sticky! I can't just check one thing without my attention getting stuck on a notification badge, an email, a feed or some other thing that I should not pay attention to right now.
I also struggle with this, but I have found some metaphorical band-aids that help a bit.
My phone's SIM no longer has any credit on it. I actively cannot browse mindlessly in a lot of places. Doesn't work perfectly, half of public transport around here has free WiFi, as do some shops, but it helps.
I have three laptops. One with the games on (Steam, Windows and nothing much else, no passwords installed except Steam… oh and Discord but I don't actually log in because the content was never interesting enough to get addicted to in the first place); one as a work machine (mac with Xcode, claude etc. installed); and one as a down-time machine (also a mac, but only co-incidentally).
Facebook itself isn't installed anywhere, though the Messenger app is, for family I otherwise can't reach; various time-hungry sites (including FB, X, here*, reddit, several news sites) are blocked as best as I can block them (harder than it should be: on iPhones the "time limit" tool doesn't allow "zero" and reflexes to tap "ignore limit" are too quick to form, and on the desktop it's increasingly ignoring my hosts file).
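For what it's worth, the hosts-file side of that is nothing exotic - just entries along these lines (illustrative, not my actual list; 0.0.0.0 points the name at an unroutable address so lookups fail fast):

    0.0.0.0 reddit.com
    0.0.0.0 www.reddit.com
    0.0.0.0 x.com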
YouTube has so many ads, it's no longer possible for me to habit-form with it. Well, that and the home suggestions are consistently 90% bad, and the remaining 10% includes items in my watch-later playlist that I don't get around to watching.
* see my comment history for how well that attempt at self-control is actually working.
I started by turning off notifications for anything that isn't needed for important people (e.g. friends and family) to reach me. I got rid of most social media[0] a little while after that. Another thing I did when I still used a smartphone was remove all the apps from my home screen; at least on iOS, notification badges don't appear in the slide-out app tray. For quick access to essentials I used an app that provides a widget for hyperlink-like launching of apps.
[0] HN, Reddit, and Tumblr are the exceptions for me. I have notifications off, and those platforms tend to invite more nuanced discussion and be less distracting overall.
The first step is to understand why you can't disconnect. Ways to handle it will be different based on that.
One reason might be some kind of physical/psychological addiction (either to apps themselves or the act of looking at your phone). One reason might be that what you're doing is more boring than what you normally do on your phone.
Honestly, I don't. I go through phases. I have a Tampermonkey script that blank-screens sites, and that's been very effective. Reddit is a tough one because there's a ton of useful information on there, but once you're on it, it's easy to start scrolling. You could be extreme and get a device just for work, perhaps with Google Voice and wifi only, to save on a membership fee.
I learned to hate smartphones, so I literally threw mine away. People can write me an email or call me on my landline. On the desktop I use Debian with WindowMaker. This is enjoyably distraction-free. I am a free man.
One of these days I'm going to get an old ThinkPad, install OpenBSD on it, and switch the majority of my computing to TUI programs. Lynx renders websites remarkably well and keeps me away from the seedy strip mall that today's internet has become. Perhaps I'll start posting on Usenet; I hear there are at least a few people still there.
I had an early experience with a Palm III and a cell modem strapped to it. It was intoxicating. I still find the pull of the phone to be very strong sometimes. It's an ongoing battle to maintain a healthy relationship with it. Such a useful tool, but also a massive time suck if you let it.
And now we're clamouring to reinstate them. Not just digitally (in the form of e.g. limitations and boundaries on attention demanding apps and activities), but politically / internationally as well, if you lean that way.
I've always had a TV or screen of some sort devoted to background music or light films, just to fill in the void between lines of code. For some, having such light stuff going on is a productivity booster. I once got a dev team that had been struggling to finish things well and truly over the finish line by putting a fat TV in the room and giving folks the ability to line up their playlists for the day, as long as it wasn't too violent or inappropriate for the workplace.
We side-watched a ton of stuff together as a team - it was great for morale - and we actually shipped stuff, too. Of course the TV eventually became a console for the build server, but it was always available to anyone who wanted to put something on in the background. Definitely a nice way to make a team a bit more coherent - as long as what's being played isn't too crazy.
I work in construction SaaS of a certain kind, and when I visit customers there is a very, very clear difference in quality/size/revenue between companies that allow headphones and those that don't.
I'll let you decide which ones you think are doing better.
Hah. When I write concrete statements, someone always comes by and says that's wrong, or a bigoted view. So I've been trying to see if letting people figure out the obvious answer themselves works better.
Obviously the places without headphones do way better.
You could even argue that society is incapable of not running into these cycles of building wisdom and losing it. Our minds are differential: things that are already here have less value, and we seek newness no matter what.
I am also older, and I see that my kids don't have certain things that I perceived as disadvantages at the time but that may have helped develop useful habits. These include quiet and boredom, which helped with focus, and the lack of ready answers or information, which may have helped imagination or generative reasoning.
I think we can recreate these things if and when we need to, but that recreation may be for the elites. I heard an interview with a professor who said he had to reintroduce Socratic exams to get around chat bots and the fact that kids now have very poor handwriting. At an elite school you can do that.
I don't think this is something just for elites at all, because so much of this happens at home. For instance, I completely agree on the boredom and have factored it into how I raise my children. Similarly, I agree on the importance of not having answers simply handed to you. Another one is realizing that not everything you're told is true, which is a big part of the reason I ultimately decided that Santa exists for my kids. And it makes me wonder if that wasn't the point all along, because otherwise it doesn't feel right to lie to your kids for years.
> rediscover the hard way the older wisdom about having separate spheres with separate hard constraints and boundaries on behaviors
This is something I also believe. Thanks for saying it.
I've been thinking and reflecting a lot on what I've been calling, for myself, "generative constraint". It's sure as heck not the same for everyone, but I think we each have a set of constraints that might help us be our best selves.
We've universalized constraints and expansiveness in a way that seems like really poor judgement. And yes, there is a capitalist critique in this too, as any good theory should have :)
I think of it as "introducing friction". There are a lot of things we do now largely because of the frictionless ease of doing them. Smartphones and social media are the obvious example, but it applies to a lot of technology-driven behaviour (pay with Face ID or a tap and people end up consuming more, for instance). And it's no surprise to me that what works for a lot of people is putting their phone somewhere else in the house - essentially introducing artificial friction.
My slightly cynical view is that most of us are more often lazy than not and default to doing the most frictionless thing. Introduce friction, and I find it very quickly forces you to think about what you're actually doing.
For certain tasks, having a movie running while I'm working is more productive for me. It gives me something to occupy my attention when I have to wait for something, without getting sucked into endless scrolling.
I really try to avoid anything about politics here, but I recall there already being a controversy about this back in May of 2024. Specifically, there were public comments from the new CEO to investors about Cracker Barrel needing to change the demographics of the customers who ate there, and, depending on your point of view, some people interpreted the way the comments were phrased as suggesting there was something morally suspect about how non-diverse, non-inclusive, old-and-white-and-straight Cracker Barrel's current dining demographics were. There was a small right-of-center online outrage du jour about it at the time. I'm not interested in litigating what the CEO said or how justified the outrage was, just noting the precedent.
So there was already a history here for people who are sympathetic to this point of view, particularly coming as it did shortly after the similar Bud Light and Target controversies.
They did need to change the demographics. It is currently overwhelmingly “a bunch of old people who are going to die soon”. The same as the CBS network.
"But that's not how real life works at all, right?"
Simulating how real life works is always a plausibly interesting goal, but it's very often at odds with a bunch of other goals players value.
A particularly sharp example of this is sports video games. It might well be interesting (and certainly realistic) to simulate bad referees in a sports game: horrible blown calls by tennis line judges, or missed calls by basketball refs, or bad umpire calls on pitches. Real-life soccer makes working the refs, and exploiting their inability to see everything, an art form, as far as I can tell.
Perhaps that's interesting, but the irony here is that real-life refs are actually bad simulations of the original perfect game code in the first place, from a certain point of view. I think debates about the use of instant replay in sports get at the heart of this, and one could imagine real-time AI assistance for refs taking this conversation much further.
I think the sports case is a particularly sharp example, but it definitely holds with all sorts of choices in games.
For Animal Crossing in particular, I remember when I finally played it, it struck me after a while how much it had in common with the MMOs of the time (EverQuest and World of Warcraft) that had severely disrupted the lives of some of my fellow game developer friends. And when I played the original Animal Crossing, I remember noticing specifically how careful the designers were to have players use up every bit of a day's interesting content after 45 minutes or an hour, so that eventually you'd run out of things to do, and that was the game's signal to put it down and pick it up again the next day. I remember being struck by how intentional it was, and how humane - particularly given their goal of making a game that was asynchronously co-op (where different family members could play in the same shared space at different times of day and interact asynchronously). As a game designer myself, I really respected the care they put into that.
Anyway, that's my immediate thought on seeing this (fascinating, valuable) experiment with LLM dialogue in Animal Crossing. The way NPCs actually work in these games has been honed over time to serve a very specific function. It's very similar to personal testimonials by paid actors in commercials: a human expressing an idea in personal dialogue form triggers all sorts of natural human attention and receptiveness in us as audience members, and so it's a lot more sticky... but getting the information across quickly and concisely is still the primary point. Even dialogue trees are often skipped in games because of their inefficiency.
I totally think that there will be fascinating innovations from the current crop of AI in games, and I'm really looking forward to seeing and trying them. I just think it's unlikely they will be drop-in replacements for a lot of the techniques that game developers have already honed for cases like informational NPC dialogue.
I don't mean to imply that realistic is always better, just that there are other ways to figure out when to stop talking to someone. And I think the current method is actually quite bad for immersion and building empathy in the player.
A few years ago, I was feeling dispirited about being middle-aged and had come around to the conclusion that, at least when playing games, my general dissatisfaction and "meh" response to the games I was playing was probably a function of my age rather than anything about the games themselves. I was enjoying some games to an extent, but nothing was really grabbing me, and I was having a hard time sticking with much of what I was playing. It seemed like a reasonable just-so story, and a particularly exhausting one if you make games and are theoretically supposed to like them.
And then I picked up Hollow Knight, was utterly sucked into it in a deep way, couldn't put it down, and came out the other side doing the Principal Skinner meme - "Am I so out of touch? No, it's all those other games that have been wrong..."
So thank you, Team Cherry, for helping remind me that 1) I really can love games deeply, even in my tired middle-aged-ness, and 2) sometimes the problem isn't that a person is being too judgmental; sometimes the lofty potential of their ideals really is justified, and other creative people (for a variety of understandable reasons, really - making games is a hard and costly business) mostly aren't even aiming for such things.
When I was young there were two types of games I tended to enjoy: single-player games (e.g. Nethack, Half-Life, Starcraft, and others with a good story and gameplay, or just deep gameplay) and LAN party games (e.g. Unreal Tournament, Counter-Strike, Total Annihilation, Quake II, and similar). LAN party games were more fun at LAN parties than online, and not just because server browsers all kind of sucked. Playing alongside other people you can see in the same room is a very different experience from playing alongside people you've never met, can't see, and will never encounter again.
These days my friends are scattered across the country, with jobs & families, and so LAN parties are basically dead. And many new games don't even support LAN play; instead they tend to be optimized for online play with some sort of ranking system.
That leaves single-player games. And really good single-player games are rare, just like really good anything is rare. I find a lot of story-driven single-player games have good stories but crap gameplay, so it's frustrating to try to complete the story. If the story is good enough & the gameplay bad enough, I'll just cheat & treat the whole thing more like a book or movie than a game, but for a lot of games I don't even bother with that.
But occasionally a game grabs me. The story is great, and the gameplay is at least good enough, or it's just really good gameplay that stays engaging for a long time (e.g. Slay the Spire). These are few & far between, because making really good games is very difficult.
As I age my tolerance for mediocrity decreases, partly because I already own a whole bunch of still-engaging games I can always play. So I agree with your points. The really great games are rare, far rarer than best-selling games.
FWIW there is a new-ish kind of intermediate genre between classic LAN/ranked multiplayer and single player, which is the whole “survival” genre. Generally speaking, they can be played as single player games, but also allow for small-scale co-op, synchronously or asynchronously. So even if you and a buddy have different schedules, you can make progress separately but still occasionally play together.
Valheim, Grounded, Ark, Satisfactory are a few among many others.
I do most of my gaming in the single-player indie space these days. It's really where the fun is. You have a deep time-tested catalog of beautiful and complete experiences that don't try to nickel-and-dime you with drip-fed content or recurring microtransactions. They're games first and foremost, not extraction machines. It's the opposite of the BS you see in big-budget titles.
Absolutely. I've been playing a lot of Stationeers (which, wildly, requires writing assembly on the in-game chips) and Satisfactory lately. Both are clearly labors of love by their small dev houses.
Same here. Hollow Knight and Elden Ring are the only two games in the last decade that I've put more than a few hours into. E.g. I used to love Civilization, but none of them since 4 have done it for me. Same with SimCity 2000. I'll play Madden or Fortnite with my kids, but I'm done mentally after 20 minutes.
The last game I liked like these was Morrowind back in 2004 or so. One of the great things about being a parent is sharing these kinds of things with your kids. I've already got Silksong downloaded on our Switch and XBox to play together when they get home from school in ~1 hr.
I believe there is a function of age to some degree. I 100%'d Assassin's Creed 2 at 14, and now I have a decade and a half of watching studios remake that goddamn game. They're all trying to make the safest, best-practices game they can to reach the widest audience, and they end up bland, with nothing new to offer those of us who have been playing longer.
Almost all my favourite titles of the last decade have been smaller titles. Even the ones I bounce off, I can appreciate for trying something and missing the mark; there are genuinely amazing works of art out there that a large studio simply can't produce.
I don't think the AAA games are 'wrong' - to my bewilderment, Assassin's Creed sells like crazy each year despite nearly everyone in my friendship circle tapping out after the pirate one a decade ago - it's just that if you play more than a couple of things a year, you outgrow the 'mainstream' titles.
The thing that stuck with me after learning about it is that AAA games aren't called AAA because they are supposed to be the best of the best or the most advanced.
AAA games are named after AAA investment ratings. A AAA game is supposed to be the most profitable investment for the publisher putting up the money. And the market has become saturated with enough customers that doing new things to attract more customers is riskier than doing the same thing to keep your existing ones.
I typically tend toward indie/small games as well, but there are definitely some masterpieces put out by large studios. Have you played Red Dead Redemption or Cyberpunk? The fidelity, content, and refinement are just unmatched. I can't recommend them enough.
Also, if you like first-person puzzlers, I recently picked up Supraworld and instantly fell in love. It's a gamer's game for sure and one of the best platformers I've played in quite a while.
I've been keeping reviews in the last few years. Just privately, for myself. I started doing this because I couldn't remember what I did and didn't play, and had a "wait, I think I tried this before and didn't like it" deja-vu a few times.
Right now the rankings are: bad (388), meh (191), okay (71), good (63), superb (12). Turns out I dislike a lot of games. This is also why I started to just pirate things first and buy them if I like them; I have 558 games in my GOG library, and for more than 80% of them I've either barely played them or don't like them.
I can recommend keeping reviews by the way; I've since started doing this for tons of stuff, from games to films to TV episodes to wine to coffee, and writing things down really helps narrow down what you like or dislike about things. By keeping it private you can write whatever you like and don't need to do a "full" review. For example my entire review for Ninja Gaiden: Ragebound (rated "meh") is "Too fast-paced for my liking. Also don't really like the controls." And for me, that's enough.
I can write a long essay on why I like or dislike games, but to be honest I'd rather be playing Silksong.
Going over things, the dividing line between "good" and "superb" is somewhat arbitrary, so I included both, because well, why not? I did it like that to mimic the commonly used "5 star" rating, but maybe it should just be three: "bad", "okay", "good". Dunno.
Also note that I haven't played many games. I'm just now getting around to The Witcher 3, which is over ten years old. So...
"-" starts a new entry, followed by one or more titles, followed by "tag: value", followed by whatever I wanted to write (if anything) as Markdown. It lists "superb" first, alphabetically, then "good".
Have you played Ori and the Blind Forest? That's another nice single-player platformer, though it eventually proved too hard for me (I can't really do the pixel-and-timing-perfect moves and don't much enjoy trying).
I'm probably saying something obvious here, but it seems like there's a pre-existing binary going on ("AI will drive amazing advances and change everything!" vs. "You are wrong and a utopian/grifter!") that takes up a lot of oxygen, and it really distracts from the broader question: given the current state of AI and its current trajectory, how can it be fruitfully used to advance research, and what's the best way to harness it?
This is the sort of thing I mean, I guess, by way of close parallel in a pre-AI context. For a while now, I've been doing a lot of private math research. Whether or not I've wasted my time, one thing I've found utterly invaluable has been the OEIS.org website, where you can enter a sequence of numbers and search for it to see what contexts it shows up in. It's basically a search engine for numerical sequences. It has been invaluable because I will often encounter some sequence of integers, I'll be exploring it, and then when I search for it on OEIS, I'll discover that the sequence shows up in very different mathematical contexts. That gives me an opening to 1) learn some new things and recontextualize what I'm already exploring, and 2) gather raw material for asking new questions.

Likewise, Wolfram Mathematica has been a godsend, for similar reasons: if I encounter some strange or tricky or complicated integral or infinite sum, it is frequently handy to just toss it into Mathematica, apply some combination of parameter constraints and Expands and FullSimplifys, and see if whatever I'm exploring connects, surprisingly, to some unexpected closed form or special function. Once again, 1) I've learned a ton this way and gotten survey exposure to other fields of math I know much less well, and 2) it's been really helpful in iteratively asking new, pointed questions.

Neither OEIS nor Mathematica can take my hard problems and solve them for me. A lot of this process has been about me identifying and evolving what sorts of problems I even find compelling in the first place. But these resources have been invaluable in broadening what questions I can productively ask, through something more like a high-powered, extremely broad, extremely fast search. My engagement with these tools has made me a lot smarter and a lot broader-minded, and it's changed the kinds of questions I can productively ask. To make a shaky analogy, books represent a deeply important frozen search of different fields of knowledge, and these tools represent a different style of search, reorganizing knowledge around whatever my current questions are - and acting in a very complementary fashion to books, too, by directing me to books and articles once I have enough context.
Although I haven't spent nearly as much time with it, what I've just described about these other tools is certainly similar to what I've found with AI so far, only AI promises to deliver even more. As a tool for focused search and reorganization of survey knowledge across an astonishingly broad range of domains, it's incredible. I guess I'm trying to name a "broad" rather than "deep" stance here, concerning the obvious benefits I'm finding with AI in the context of certain kinds of research. Or maybe I'm pushing toward what I've seen called, over in the land of chess and chess AI, a centaur model - a human still driving, but deeply integrating the AI at every step of the process.
I've spent a lot of my career as a programmer and game designer working closely with research professors in R1 university settings (in both education and computer science), and I've particularly worked in contexts that required researchers to engage in interdisciplinary work. And they're all smart people (of course), but the silofication of various academic disciplines and specialties is obviously real and pragmatically unavoidable, and it clearly casts a long shadow on what kind of research gets done. No one can know everything, and no one can really even know too much of anything out of their own specialties within their own disciplines - there's simply too much to know. There are a lot of contexts where "deep" is emphasized over "broad" for good reasons. But I think the potential for researchers to cheaply and quickly and silently ask questions outside of their own specializations, to get fast survey level understandings of domains outside of their own expertise, is potentially a huge deal for the kinds of questions they can productively ask.
But, insofar as any of this is true, it's a very different way of harnessing of AI than just taking AI and trying to see if it will produce new solutions to existing, hard, well-defined problems. But who knows, maybe I'm wrong in all of this.
I was in the game industry when we originally transitioned from C to C++, and here's my recollection of the conversations at the time, more or less.
In C++, inheritance of data is efficient because the memory layout of base class members stays the same in different derived classes, so fields don't cost any more to access.
And construction is relatively fast (compared to the alternatives) because setting a single vtable pointer is faster than filling in a bunch of individual function-pointer fields.
And non-virtual functions were fast because of, again, static memory layout and access, plus inlining.
Virtual functions were a bit slower, but ultimately that just raised the larger question of when and where a codebase was using function pointers more broadly - virtual functions were just one way of corralling that issue.
And the fact that there were idiomatic ways to use classes in C++ without dynamically allocating memory was crucial to selling game developers on the idea, too.
So at least from my time when this was happening, the general sense was that, of all the ways OO could be implemented, C++-style OO seemed to be by far the most performant for the concerns of game developers in the late 90s / early 2000s.
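To make that concrete, here's a minimal sketch of the kind of layout and dispatch properties I mean - hypothetical names, not code from any real engine:

    #include <cstdio>

    struct Entity {
        // Base members sit at fixed offsets that stay the same in every
        // derived class, so derived code accesses them at no extra cost.
        float x = 0.0f, y = 0.0f;

        // Non-virtual: resolved at compile time and trivially inlinable.
        void MoveBy(float dx, float dy) { x += dx; y += dy; }

        // Virtual: one hidden vtable pointer per object, written once during
        // construction; calls go through a single indirection.
        virtual void Think() { std::printf("Entity at (%.1f, %.1f)\n", x, y); }

        virtual ~Entity() = default;
    };

    struct Monster : Entity {
        int health = 100;  // appended after the inherited Entity members

        // Overriding only changes this class's (shared, static) vtable;
        // constructing a Monster still just writes the vtable pointer
        // plus the fields above.
        void Think() override { std::printf("Monster, health=%d\n", health); }
    };

    int main() {
        Monster m;             // stack allocation - no heap required
        m.MoveBy(1.0f, 2.0f);  // static dispatch, can be inlined
        Entity* e = &m;
        e->Think();            // dynamic dispatch through the vtable
    }

The point being that most of this machinery is paid for at compile time, which made it an easier sell than hand-rolled function-pointer tables in C.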
I've been out of the industry for a while, so I haven't followed the subsequent conversations too closely. But I do think, even when I was there, the practical downsides of OO class hierarchies were starting to rear their ugly heads. Giant base classes are indeed drastically bad for caches, for example, because they tend to produce giant, bloated data structures. And deep class hierarchies turn out to be highly sub-optimal, in a lot of cases, for information hiding and for evolving code bases (especially game code, which was one of my specialties). As a practical matter, as you evolve code, you don't get the benefits of information hiding that were advertised on the tin (hence the current boosting of composition over inheritance). I think you can find better, smarter discussions of those issues elsewhere in this thread, so I won't cover them.
But that was a snapshot of those early experiences - the specific ways C++ implemented inheritance for performance reasons were definitely, originally, much of the draw to game programmers.
I was knee-deep in work as a technical game designer and engine programmer on Soldier of Fortune when Half-Life came out. I can't put into words the impression the opening of that game left on me; I still remember very distinctly experiencing the tram ride, just being utterly entranced, and then being deeply irritated when an artist walked over to my cubicle, saw the game, and jokily asked what was going on, pulling me out of the experience. For me, it was one of those singular experiences you only have very, very rarely in gaming.
It's funny, though - I would say in retrospect that Half-Life had the typical vexed impact of a truly revolutionary game made by a truly revolutionary team. In terms of design, the Half-Life team was asking and exploring a hundred different interesting questions about first person gaming and design, very close to the transition from 2d to 3d. And a few years later, their influence often got reduced down to a small handful of big ideas in the games that followed. After Half-Life, because of the impact of their scripted sequences, FPS games shifted to much more linear level designs to support that kind of roller coaster experience (despite many of Half-Life's levels actually harkening back to older, less linear FPS design). The role of Barneys and other AI characters also really marked the shift to AI buddies being a focus in shooters. And the aesthetic experience of the aggressive marine AI as enemies cast a long shadow, too, highlighting the idea of enemy AI being a priority in single player FPS games.
Certainly, those were the biggest features of Half-Life that impacted our design in Soldier of Fortune, which did go on to shift to much more linear levels and a much heavier focus on scripted events, and which would have ended up with much more emphasis on AI buddies too if I hadn't really put my foot down as a game programmer (and in my defense, if you go back to FPS games from that era, poorly implemented AI buddies are often, by a wide margin, the most frustrating aspect of those games, along with forced, poorly done stealth missions or poorly implemented drivable vehicles - the fact that Barneys were non-essential is why they worked well in the original Half-Life). You can see this shadow pretty clearly if you compare Half-Life to the later single player campaigns of Call of Duty and Halo. Both are series that, in their single player form, are a lot more focused and a lot less varied than Half-Life was, but they clearly emphasize those aspects of Half-Life I just mentioned. And those were the single player FPS games that were, in practice, actually copied for quite a while.
Thank you for soldier of fortune. It was a pioneer in its own way with how brutal the enemy destruction was. I loved it. Incredible game that is up there with DOOM, Half Life and Halo for me.
Hey, thanks! I ran myself ragged on that project, so that's nice to hear. And yeah, I think we really did nail a particular kind of visceral experience.
I went to college in 1995, and my very first week of school, I was introduced to the internet, usenet, ftp, and netscape navigator. A few months later, I was downloading cool .mod files and .xm files from aminet and learning to write tracker music in Fast Tracker 2, downloading and playing all sorts of cool Doom wads, installing DJGPP and poring over the source code for Allegro and picking up more game programming chops, and getting incredibly caught up in following the Doom community and .plan files for the release of Quake.
Then Quake came out, and the community that grew up around it (both for multiplayer deathmatch and for QuakeC mods) were incredible. I remember following several guys putting up all sorts of cool experiments on their personal webpage, and then being really surprised when they got hired by some random company that hadn't done anything yet, Valve.
There was really just this incredible, amateur-in-the-best-sense energy to all those communities I had discovered, and it didn't seem like many people (at least to my recollection) in those communities had any inkling that all that effort was monetizable, yet... which would shortly change, of course. But everything had a loose, thrown off quality, and it was all largely pseudo-anonymous. It felt very set apart from the real world, in a very countercultural way. Or at least that's how I experienced it.
This was all, needless to say, disastrous to my college career. But it was an incredible launching pad for me to get in the game industry and ship Quake engine games 2 years later, in many cases with other people pulled from those same online communities.
I miss that time too. But I think there's something like a lightning in a bottle aspect to it all - like, lots of really new, really exciting things were happening, but it took some time for all the social machinery of legible value creation / maximization to catch up because some of those things were really so new and hard to understand if you weren't in at the ground floor (and, often, young, particularly receptive to it all, and comfortable messing around with amateur stuff that looked, from the outside, kind of pointless).
It's a different thing nonetheless. I don't think what makes the modern web bad is that the "unwashed masses" are using it (as several commenters here assert); it's the commercialization.
The web is no longer a place for people to be able to interact freely with each other. It's a place to monetize or be monetized. That means that a lot of the value of the web is gone, because it's value that can't be monetized without destroying it.
The "unwashed masses" (your words) are only here because companies that want to advertise to them made their systems just good enough to draw them in, but just bad enough that they exploit the worst instincts in people to make more advertising money.
If the web were not commercial then it wouldn't be mainstream. While they are different, they are fundamentally linked.
> Libraries and highways are very mainstream and are not commercial.
I would argue that the highways are actually a counter-example of what you are saying. They exist to connect workers to businesses, businesses to other businesses, and businesses to consumers. While there is certainly an amount of traffic on the highway that is not doing those three things, we have a name for the first one in any populated area - rush hour. To say that the highway system was not intended to facilitate commerce is just historically inaccurate.
The difference between the highway system and the Internet is that the creation of the Internet was not intended to facilitate commerce - in fact it took several years (1991-1995, as best I can tell) for commercial use to officially be allowed, as the neolibs in government did not want to keep funding the network. That choice is why we are where we are with the Internet - the good and the bad.
Nice response. It's true that highways carry both commercial and non-commercial traffic, and that trucks and commercial vehicles clog up highways and make it worse for non-commercial traffic. There is also a difference between the internet (communication infrastructure) and the web (stuff that uses it), which I was wary of, so the analogy isn't perfect in OP's context.
But the vision of an information "superhighway" should be something that is better than regular highways. The good news is that network bandwidth is much easier to add than highway lanes, and is increasing at a much faster rate than human bandwidth.
I'm using the web as a synecdoche for the Internet as a whole because before the Web there wasn't much of a reason for Joe and Jane Q Public to use the Internet.
The Internet was intentionally commercialized and privatized as a third step in its development, from DARPA project, to education/research network, to what we have today.
Mainstreaming is a side effect of its broadening scope; as college students graduated and scholars took their work home with them, the NSFnet backbone was ceded to Sprintlink, and OS/hardware developers started working on consumer-grade interfaces.
I miss the Internet of the 90s because every page I visit today has a pop-up asking me if I want to accept cookies. It makes browsing the open web a jarring experience.
Why the EU didn't require an ability to do a global opt-out while forcing web sites to implement this feature is a mystery to me.
Well, it became mainstream by the late 1990s (the pets.com Super Bowl ad ran in early 2000). But in 1992 or so it was still a bunch of Gopher sites (this newfangled "World Wide Web" will never displace this technology...) and MUDs being used by college students and hobbyists.
In 1993, UNC Charlotte had one computer in its library with a big sign next to it explaining what the World Wide Web was. It would be a few years yet before home computers became commonplace in that region, and late ‘90’s before everyone was more likely than not to have a computer at home, and to have some sort of dial-up internet. I was purchasing domains in 1997-1998 for $3/ea, I believe (I wish I could have known then what I know now…). I sold my first website design job in ‘96, which would probably coincide with when many businesses around Charlotte were establishing websites for the first time.
Fun to think about.
In this context, “mainstream” may just be another way to describe Web 2.0.
I think this is genuinely true. The internet today appeals to the lowest common denominator, in the same way that blockbuster movies often do. It is less appealing because it is less specific to our tastes.
Similar story here, with similarly disastrous impacts on my GPA. There was something magical about that time - technology was moving so rapidly and access to information was exploding. It was all so very early that it seemed like anything was possible for an aspiring computer nerd with a good computer and a fast internet connection.
Of course, it was also really unevenly distributed. If you were on the "have" side of the equation - i.e. in a setting like a college campus, already working in the industry, or in the right IRC channels, with access to modern hardware - you could hop along for the ride and it felt like anything was possible. Otherwise, you were being left behind at a dramatic rate.
Overall things are better now, because so many more people have access to data and resources online. It's trivially easy to learn how to code, information is readily available to most of humanity, and good quality internet access has exploded. But I can't deny that it was kind of amazing being one of the lucky ones able to ride that wave.
Same here - the Internet, game modding, early LAN->Internet bridges for multiplayer gaming, IRC and all that probably reduced my GPA by about 1.0, which caused me to miss out on the "premium" tech employers early in my career and ultimately set me back decades. Thank you, rec.games.computer.quake.* hierarchy and Quake-C mailing lists.
The AI image generation and 3D printing communities had a similar kind of feel from 2021-2023. Both are slowing down now, though, becoming more mainstream and needing less tinkering every day. Which is great but disappointing at the same time.
It doesn't feel like the same energy to me. Image generation was always "ok, maybe we release this for free or cheap now to see how people feel about it, but sooner or later we're going to charge $$$", and 3d printing... I don't know, I think those guys are still doing their own thing. The barrier to get in is lower, but there's still only lukewarm interest in doing so.
Same year for me. My college experience was a mix of PCU, Animal House, Hackers and Real Genius (ok not quite). I first saw email in a Pine terminal client. Netscape had been freshly ripped off from NCSA Mosaic at my alma mater UIUC the year before. Hacks, warez, mods, music and even Photoshop were being shared in public folders on the Mac LocalTalk network with MB/sec download speeds 4 years before Napster and 6 years before BitTorrent. Perl was the new hotness, and PHP wouldn't be mainstream until closer to 2000. Everyone and their grandma was writing HTML for $75/hr and eBay was injecting cash into young people's pockets (in a way that can't really be conveyed today except using Uber/Lyft and Bitcoin luck as examples) even though PayPal wouldn't be invented for another 4 years. Self-actualization felt within reach, 4 years before The Matrix and Fight Club hit theaters. To say that there was a feeling of endless possibility is an understatement.
So what went wrong in the ~30 years since? The wrong people won the internet lottery.
Instead of people who are visionaries like Tim Berners-Lee and Jimmy Wales working to pay it forward and give everyone access to the knowledge and resources they need to take us into the 21st century, we got Jeff Bezos and Elon Musk who sink capital into specific ego-driven goals, mostly their own.
What limited progress we see today happened in spite of tech, not because of it.
So everything we see around us, when viewed through this lens, is tainted:
- AI (only runs on GPUs not distributed high-multicore CPUs maintained by hobbyists)
- VR (delayed by the lack of R&D spending on LCDs and blue LEDs after the Dot Bomb)
- Smartphones (put desktop computing on the back burner for nearly 20 years)
- WiFi (locked down instead of run publicly as a peer to peer replacement for the internet backbone, creating a tragedy of the commons)
- 5G (again, locked down proprietary networks instead of free and public p2p)
- High speed internet (inaccessible for many due to protectionist lobbying efforts by ISP duopolies)
- Solar panels (delayed ~20 years due to the Bush v Gore decision and 30% Trump tariff)
- Electric vehicles (delayed ~20 years for similar reasons, see Who Killed the Electric Car)
- Lithium batteries (again delayed ~20 years, reaching mainstream mainly due to Obama's reelection in 2012)
- Amazon (a conglomeration of infrastructure that could have been public, see also Louis DeJoy and the denial of electric vehicles for the US Postal Service)
- SpaceX (a symptom of the lack of NASA funding and R&D in science, see For All Mankind on Apple TV)
- CRISPR (delayed 10-20 years by the shuttering of R&D after the Dot Bomb, see also stem cell research delayed by concerns over abortion)
- Kickstarter (only allows a subset of endeavors, mainly art and video games)
- GoFundMe (a symptom of the lack of public healthcare in the US)
- Patreon (if it worked you'd be earning your primary income from it)
Had I won the internet lottery, my top goal would have been to reduce suffering in the world by open sourcing (and automating the production of) resources like education, food and raw materials. I would work towards curing all genetic diseases and increasing longevity. Protecting the environment. Reversing global warming. Etc etc etc.
The world's billionaires, CEOs and Wall Street execs do none of those things. They just roll profits into ever-increasing ventures maximizing greed and exploitation while they dodge their taxes.
Is it any wonder that the web tools we depend upon every day from the status quo become ever-more complex, separating us from our ability to get real work done? Or that all of the interesting websites require us to join or submit our emails and phone numbers? Or that academic papers are hidden behind paywalls? Or that social networks and electronic devices are eavesdropping on our conversations?
I never loved the original Far Cry as a player, but I did deeply appreciate it as a game designer.
I was working as a game programmer and technical designer on a big budget FPS back when the original Half-Life was released, and immediately "AI AI AI!!!!!" became a stifling buzzword and thought-terminating (ironically) slogan, heavily reorienting how people thought about shooter design and, essentially, ending boomer shooters as a thing for a good long while and ushering in the era of Halo, Call of Duty, cover-based shooters, and so on.
I happened to adore boomer shooters and have good taste for their rhythms and making them, so the transition is not one I personally enjoyed at all.
But worse in a way, Half-Life ALSO ushered in much more linearity in level design because of their awesome interactive set pieces and the particular interactive way they got across their story. Certainly that was the way its release was experienced in the studio I was in, anyway. Less sprawling, unfolding, and backtracking like in Doom (where the space unfolds over the course of a level in something like a fractal way), more following a string of pearls and hitting all the scripted events, like a haunted house ride. You didn't want the players getting lost, you didn't want them to get bored backtracking, and you didn't want them to miss any of the cool one-off scripted content you'd made for them.
(I love Half-Life, so I don't blame it for any of this - it's a vastly more interesting game than many of the games it inspired, which I think is typical of highly innovative games)
At the time, I wasn't quite yet a thoughtful enough, perceptive enough game designer to recognize how deeply in tension those two changes ended up being with each other. And so I spent a miserable year of eventual burnout trying to make "good enemy AI that players actively notice" as a programmer for a game whose levels kept getting progressively tighter, more linear, and more constrained to support the haunted house ride of scripted events.
As a point of contrast, games like Thief and Thief 2 were magnificently structured for good, cool AI that players could notice, and it was specifically because of the non-linear ways the levels were built, the slow speed of the game, the focus on overtly drawing attention to both player and enemy sense information, and the relationship between the amount of space available to players at any given point to the amount of enemies they faced, as well as the often longer length of time players engaged with any particular set of enemies... and of course, despite all these cool features, poor, poor Thief was released to store shelves just 11 or 12 days after the original Half-Life. Business advice 101 is don't release your first person game 11 or 12 days after Half-Life.
Anyway, that all leads into my admiration for Far Cry's design. Their outdoor levels actually steered the game in a direction that could let enemy AI breathe and be an interesting feature of the design, in turn giving players higher level choices about when, where, and how to initiate fights. In that sense, it reminded me of where Thief had already gone previously, but in the context of a high profile shooter. But doing that required actively relinquishing the control of the haunted house ride-style of pacing, which I think was kind of brave at the time.
I opened the collection of links, which is quite good if a bit old. But then I had a subconscious mental itch, and thought, wait... where had I heard the name mrelusive before? That sounds _really_ familiar.
And then I remembered - oh, right, mrelusive, JP-what's-his-name. I've read a huge amount of his code. When I was working on Quake4 as a game programmer and technical designer, he was writing a truly prodigious amount of code in Doom 3 that we kept getting in code updates that I was downstream of.
And he was obviously a terrifically smart guy, that was clear.
But I had cut my teeth on Carmack's style of game code while working in earlier engines. Carmack's style of game code did, and still does, heavily resonate with my personal sensibilities as a game maker. I'm not sure if that particular style of code was influenced by id's time working with Objective-C and NeXTStep in their earlier editors, but I've long suspected it might have been - writing this comment reminds me I'd been meaning to explore that history.
Anyway, idTech4's actual game (non-rendering) code was much less influenced by Carmack, and was written in a distinctly MFC-style of C++, with a giant, brittle, scope-bleeding inheritance hierarchy. And my experience with it was pretty vexed compared to earlier engines. I ultimately left the team for a bunch of different reasons a while before Quake4 shipped, and it's the AAA game I had the least impact on by a wide margin.
I was thinking about all this as I was poking over the website, toying with the idea of writing something longer about the general topics. Might make a good HN comment, I thought...
But then I noticed that everything on his site was frozen in amber sometime around 2015... which made me uneasy. And sure enough, J.M.P. van Waveren died of cancer back in 2017 at age 39. He was a month younger than me.
I didn't really know him except through his code and forwards from other team members who were interacting with id more directly at the time. But what an incredible loss.
Just wanna say that I loved Soldier of Fortune. Lots of FPSs around that time felt really light and plastic-y. SoF was one of the few that made shooting a gun feel satisfying and visceral (and I'm not even talking about the gore, I played the censored version).
Thanks! That's really cool to hear, 20+ years later.
I actually did all the effects work on the weapons (muzzleflashes, smoke, explosions, bullet holes and surface sprays, and all the extensions to Quake2's particle systems to make that content possible) and all the single player weapons system game balancing, as a matter of fact. Both the sound design and animation / modelling on the weapons went through a number of iterations to get them really over-the-top and delightfully powerful / ridiculous, too - I was lucky to work closely with a great sound designer and a really talented animator on that.
Well, as I say, I'd been meaning to look into this in more detail because it's something I'd been long curious about. But I don't think I have time right now to dig into it.
Really enjoyed this comment--thanks for sharing. Game development really sounds like such a different beast from standard line-of-business programming. Always enjoy hearing stories about it and reading books about it (Masters of Doom comes to mind).
Thanks! I've been thinking a lot recently about maybe getting some of my own stories down. The late 90's were a really fascinating time to be in games.
And I loved Masters of Doom, too, although it was weird reading it and occasionally seeing people I knew show up in it, briefly.
You should write them down. I would love to read them, and I'm positive many others would, too. The 90s gaming scene is incredibly fascinating to read about, especially as it started to shift from the cowboy ethic to the corporate ethic (both have their pros and cons). I think I speak for a lot of us when I say we'd love to hear what you have to share.
I ... hmm. My memory is really, really dusty on that.
I remember I had a handful of conversations with Bryan Dube during development about Q4 deathmatch. He was a super sharp game programmer / technical designer who had done a ton of the work on Soldier of Fortune 2 deathmatch previously, and had worked on the Urban Terror mod for Quake 3 before that. And he was much more focused on multiplayer than I was.
We talked a lot about weapons (as I had done most of the code side work on weapons in Soldier of Fortune), but I'm now remembering him being really keen at the time on adding more high skill play to Q4 deathmatch. We all loved rocket jumping, and I remember him really wanting to add other kinds of high skill movement.
So that much I definitely remember. More than that and my memory is kind of fuzzy. To be honest, lots of team members loved deathmatch and Quake, and all of us were of course talking about gameplay possibilities all the time, so it's possible the idea originated somewhere else on the team.
Ah, yeah. If you go through the code there, Bryan had to change that file in quite a few places to implement crouch sliding.
Nick Maggoire is an animator. Super nice guy. No inside voice at all. I shared an office with him for a while :) I haven't kept up with him, but after Q4 he left for Valve where he's been ever since, it seems.
That comment strongly suggests to me that Nick did the animations for player crouch sliding. Or, that's my hunch anyway.