
Most of the effort in making a language a production language is on the tools and libraries end of things, and LSP is sort of the tip of the iceberg in terms of getting into that stuff. Before that, you probably want to have features like "good error messages" or "working string and math libraries".

For a long stretch of the past 15-odd years, new "Web" languages got plenty of adoption because the tooling for that segment remained barebones everywhere: a lot of the functionality already lived in SQL or JS, anything in between was glue, and so competition on language features and syntax took precedence. It's probably in a consolidation phase now - things are getting more exciting in the lower layers of the stack instead.


The silver lining of this cloud is that it acts as a selection force on institutions, too: If people are harder to pacify, institutions have to step up their game and deliver, or failing that, market themselves better. And that parallels a broad trend that's been in place since antiquity: building sustainable institutions instead of succumbing to warlords and despots. Legal codes, religious orders, and so forth have built up a vast underlying structure to face the ordinary challenges of humanity at its worst. There's nothing to suggest that that trend ends because we have some new gadgets.

But it takes place as a reaction, a series of rapid cultural changes. I would say that we had such a shift take place post-2008: besides the economy, the smartphone era took off and everyone since then has contended with a new status quo of limited privacy, temporary status, broad-not-deep social networks, and a constant background noise of gossip and scandal. Have we gotten better at navigating this world since 2008? Absolutely, I would say. In the first four-to-five years we had a whole bunch of theories about a massively connected world get tested in reality, culminating in stories such as Anonymous, Wikileaks, Arab Spring, Occupy, Black Lives Matter, and Gamergate. The years since then have seen various reactions to those stories play out as one major figure after another gets embroiled in scandal.

Even if this is fostered by state actors, the overall effect is one of "boiling down" institutions to their basic premise, where they are easier to challenge, as the arrangements that locked them in before get severed.

And I think the public recognizes that to some degree - the low empathy comes in combination with a renewed interest in a private approach to philosophy, rather than a collective one - a sense that existing institutions fundamentally don't have the right answers and something has to be done. We're merely acting in accordance with the times.


1. Philosophers derive measures of truth from theories of truth. To accept the measure you have to accept the theory. If you question all the theories you simply arrive at well-trodden metaphysical debates within philosophy.

2. The argument over progress in philosophy is a debate born of logical positivism, and it valorizes accumulating one ever-larger shared pool of knowledge as the only goal that matters. But why does it matter?

Throughout history philosophers have often been more interested in personal development than grand collaboration. Empiricism just happens to have the property of creating tangible objects that others may study, which makes it seem dominant in a world enthralled by "whiz-bang" science - we're always looking at the next big thing. But merely knowing the physical properties of the world does nothing to inform us of how to use them in a productive manner. That's where philosophy will perpetually re-enter as a way to add holistic breadth: the principles are often old, but they need a new adaptation to the circumstances brought about by technical change.

And that also means that philosophers tend to be unpopular. If their nature is always to question how we're doing things, the natural consequence is that they get cast out for "rocking the boat". This is why it is important to draw a distinction between philosophy the contemporary academic job title and philosophy as it is practiced by ordinary people every day. The former is a strange outgrowth of the post-Enlightenment enterprise; the latter is something anyone can claim. Defining oneself as "artist", "maker", "entrepreneur" - these ideas encode philosophical purpose.


The thing I've seen with estimation practice, last I researched it, is that it works best when you can calibrate. That's something an established shop can do by extrapolating from its previous work, but it is also often as simple as "this other team took 7 months to do a similar thing, therefore we will also take 7 months." An estimate like that is usually only wrong by days to weeks, since it encompasses all phases, eliminating the fudge-factor, unknown-unknowns, and wishful-thinking aspects.
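To make the calibration idea concrete, here's a small hypothetical sketch - every project name and number below is invented for illustration, not taken from anywhere - of estimating a new project by scaling the actual durations of comparable past ones instead of summing task-level guesses:

    # All names and numbers are made up for illustration.
    past_projects = [
        # (name, scope relative to the new project, actual months taken)
        ("billing rewrite",  1.0, 7.0),
        ("reporting portal", 0.8, 5.5),
        ("partner API",      1.2, 9.0),
    ]

    def reference_estimate(scope=1.0):
        """Scale each comparable project's actual duration to the new scope,
        then average - the history already bakes in the unknown-unknowns."""
        scaled = [actual * (scope / rel) for _, rel, actual in past_projects]
        return sum(scaled) / len(scaled)

    print(f"~{reference_estimate():.1f} months")  # prints ~7.1 months here

The point isn't the arithmetic; it's that the historical durations already include all the phases a bottom-up estimate tends to forget.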

When it takes much longer, it's almost always due to design issues, or political issues that create design issues. When the design is well understood (and prototyping is hugely important to finishing the design ASAP), the implementation goes smoothly. When stakeholders take turns stirring the pot to "make their mark", it goes haywire very quickly.


I believe most of the FUD is, in fact, coming from those rivals.

Brave proposes cutting out a bunch of middlemen through the token market and making the browser something like an anticheat system: opt in and it does its best to serve quality ads while preventing click fraud.

There are a lot of details in the execution that matter to make this competitive, but the basic idea resolves many of the current conflicts of interest that make adtech a miserable market.


It's astounding how many willing employees Google has who will comment voluntarily, out of their own awe of the mothership, in favor of anything Alphabet does. There are so many of them that it's like machine learning: Google barely has to try, or even know what's going on.

I always have to remind myself how poisoned the well of tech commentary has become with behemoths like FB, GOOG, AMZN.


Here, watch this:

https://youtu.be/ffryiLmQb1U

This is Fortnite build battling. BR as a whole just refers to a game ruleset that forces and encourages a last-man-standing situation. That part was being done before video games existed. But the details of what's being communicated are different. They're different even between Rust and PUBG, despite both being military-styled. Fortnite is in a class of its own, since its cartoonish look and its building and mobility systems open up a lot of unique options. Even comparisons to Minecraft don't fit, since the build system is different.

Performing jadedness about the games being "done before" is basically like saying new movies are just collections of old cliches. Even when they are, they end up saying something different.


I think this one is mostly reflective of the risk/reward payoffs of aggressive and unusual play in any particular game. Play to win means - force the confrontation, take gambles, push your way into a surprise advantage. Play to not lose means - lean on your techniques, stay in control of the situation, build an advantage gradually.

And for most games, most of the time, playing to not lose is a more reliable way to win, because it is more consistently rewarding to practice, even though it can result in a boring risk-assessed playstyle. Playing to win is an emotionally-driven way to do things and good technique will usually counter it, but every once in a while, late in a match, the mask slips and there's an opportunity to make a play on an overwhelmed opponent.


Although I'm not intimate with the BBC hardware, this kind of thing doesn't necessarily indicate a hardware mode switch - the renderer could use double-width pixels in the HUD simply to lower draw times (a simpler rasterization loop) and conserve memory.

AFAIK the really complex scanline tricks mostly appeared on the Atari and Amiga platforms first, because they exposed explicit programmability that made it simple to define exactly how you wanted the memory layout to map to the display. Other hardware could do similar things, but you had to code a cycle-exact loop - a feat that was certainly known in the '80s, but one that relied on a good understanding of the timings of all the hardware.


It definitely switches. There are four 1 MHz timers and a vsync interrupt, so this stuff is very easy to arrange and the timing doesn't have to be very precise. (Ideally you'd leave more of a gap than Elite did, though, because it takes basically an entire scanline to do the mode switch and reprogram the palette.)

Try it on an emulator: https://bbc.godbolt.org/ - Elite is the default disc. To load: hold Shift, tap F12, release Shift. Once the game is loaded and you see the animating ship, hold F12. The mode-switching interrupt stops running while it's held down (that key is wired to the reset pin on the real hardware), and you'll see the entire display incorrectly scanned out in just one of the modes.
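If it helps to picture the timing, here's a rough, hypothetical sketch - not Elite's actual code; the frame height, cycle counts, and split line are made-up placeholders, and the mode numbers just label the two sections - modelling one frame as a loop over scanlines, with a countdown armed at vsync standing in for the 1 MHz timer whose interrupt would reprogram the video ULA and palette at the split:

    SCANLINES = 312            # roughly a PAL frame (placeholder value)
    CYCLES_PER_SCANLINE = 64   # 1 MHz ticks per scanline (assumed)
    SPLIT_LINE = 200           # where the dashboard section starts (placeholder)

    mode = 4                                   # wireframe section on top
    timer = SPLIT_LINE * CYCLES_PER_SCANLINE   # one-shot timer armed at vsync

    for line in range(SCANLINES):
        if mode == 4:
            timer -= CYCLES_PER_SCANLINE
            if timer <= 0:
                mode = 5                       # "interrupt" fires: switch for the dashboard
        if line in (0, SPLIT_LINE):
            print(f"scanline {line}: mode {mode}")

Run it and the first scanline reports one mode while the split line onward reports the other; holding F12 in the emulator is effectively the same as never letting that countdown fire.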


In the case of Elite for the BBC Micro, I know for certain that it does a mid-screen mode switch, though ;) I remember Matt Godbolt mentioning this in one of his talks or blog posts describing his jsbeeb emulator, but unfortunately I couldn't find the reference.

But anyway, it's also described here:

http://beebwiki.mdfs.net/MODE_5

The Amstrad CPC could also do this type of effect, although it could not freely position the raster interrupt. There are newer demos though that go quite crazy with reprogramming the display mid-screen, for instance:

https://floooh.github.io/tiny8bit/cpc.html?file=cpc/dtc.sna


What you are running into here is primarily a feedback mechanism issue. It's a hard problem, since good, useful feedback is never quite as simple as just asking someone. Employees are looking for it. Managers are looking for it. Executives are looking for it.

With respect to work time and workload there's always an element of presenteeism - if "enough" surfaced activity happens over the course of each week and the work is not obviously deficient, then most managers won't ask questions. You already know that much, since you had many years of school to drill that idea into your head. And when it's on-site, it's simple to get to that level - if you're in the office, you're at work, even if it isn't time-on-task.

But there is always going to be a level beyond that, of taking on tasks that are a good combination of "builds up a career" and "builds up the company" - stuff where you can act more independently, and likewise fail independently. You can easily fall into work that does neither or only one of those things, or convince yourself into workaholic behavior and sacrifice everything else. Nobody can offer a clear bright line of "this is the best possible course of action". But there is a "better" out there somewhere!

So if you can measure yourself by whether you did something of both career and company and other life things each day, then you already have a more balanced self-measurement than "quantities of issues" or "lines of code" or "hours in seat".


I literally sat down last spring and focused on this problem with a bit of brainstorming and philosophizing.

My goal was to figure out what parts of me added up to a coherent whole as a person, at least with respect to career. (Other aspects too, but mostly "what should I spend my time on, really?")

And this kind of thing is never wholly about skills, interests and talent - they are starting places, but personality and natural tendencies were a much bigger factor in my judgment.

And it helped. I found some insights about what I could/wanted to do, my personal principles, and how I might put these things together in the form of a career - like, ah, yes, I like teaching things and advising, but no, I don't want to do it in the form of formal education. And so on, probing different aspects of the process and teasing out what was most and least painful and working towards "mostly painless so that I can really focus on perfecting it".

Then over the summer I tested out some of these ambitions and found where I was and wasn't limited - like, if I wanted to teach, could I do it by streaming? After working through the logistics I realized that I had something more like lectures in mind, which would communicate better as video. Then looked into video editing and polishing up my voice work and realized that I needed more than one format to do everything I wanted. So I decided that I would have a "multimedia blog" with writing coming first, but lots of images and supplementary content.

So I started on that, and have posted a few articles and shared them (mostly with friends). I don't post at a high rate, but it's proving to be a good way of directing my writing energy away from comments like this one (hmm...)

Since the season has changed, I'm probably due to try evaluating myself again and looking for another way to inch forward and focus on what topics and techniques I'm meshing well with.

