gamman's comments | Hacker News

Having grown up in an environment where a lot of things were stored "just in case", I've arrived at a similar approach, albeit more at the level of general principles than of an explicit process.

Work surfaces stay empty unless work is in progress. Storage areas are for storage; don't mix the two. When choosing where to store an item, consider access frequency and where you would look for it first. Seeing something out of place while everything else is put away is a nice trigger and motivation for cleaning / fixing / organising. Otherwise I turn passive.

This applies to starting projects, physical items and surfaces, digital spaces, inboxes, and de-facto task lists of various types.

However, it sometimes feels almost like a compulsion, a way to procrastinate: "I cannot start unless all of it is clean." I'm just looking for a sense of control to manage the stress I may feel about a task or a situation.

For me it is hard not to think of an object (physical or virtual) when it is in front of me, so I need to keep my line of sight empty and only "see" the items I need to work on, or that are otherwise immediately relevant. Depending on stress level this "need" may go deeper, and I "want" to empty the other spaces as well, even the previous steps of the current project I'm working on.

Is this making me less effective in messy environments or does the general stress reduction and focus help compensate for this?

Also, this approach doesn't seem to be as universally good as the article suggests. Knowing a few people with diagnosed ADHD, I've understood that they may also have a different sense of object permanence.

For them, it may be hard to think of an object / item unless it is in front of them, so keeping all the "relevant" items at hand and in the line of sight is useful. Otherwise they may have trouble getting into the space mentally. Like people who tend to buy too many of the same item because they've forgotten they have a bunch in the cupboard.

In the same vein, at work I've also had trouble getting other people to agree to focused lists in our task system. For example, showing only the new items when we need to triage new things, or having a way to isolate (via a specific status step) items that got pushed back from development to analysis and need to be clarified.

Instead, they seem to prefer having longer lists to maintain a "full view" of what's going on. At the same time I see that some of them then predictably get distracted and / or have to scan through the same items each time because everything sits in the same list. Not sure how best to deal with that, so I've opted for isolation for now: I have my own filtered views, and during the relevant meetings I bring them up so others can find the items in their bigger lists manually. Perhaps they have much better mental filters than I do.

The balance could be to somehow start empty, but allow for the "mess" of relevant items during the project, and periodically prune the old stuff without thinking too much about how to deal with the in-progress things, as they are volatile anyway.


Maybe this maps to some human structures that manage the control-creativity tradeoff through hierarchy?

I feel that companies with top-down management would have more agency, and perhaps creativity, towards (but not at) the top, while implementation would be delegated to the lower layers with increasing levels of specification and restriction.

If this translates, we might have multiple layers with varied specialization and control, and hopefully some feedback mechanisms about feasibility.

Since some hierarchies are familiar to us from real life, we might prefer those to start with.

It can be hard to find humans who are very creative but also able to integrate consistently and reliably (in a domain). Maybe a model doing both well would also be hard to build, compared to stacking a few different ones on top of each other with delegation.

I know it's already being done by dividing tasks between multiple steps and models / contexts in order to improve efficiency, but having explicit, strong differences in creativity between layers sounds new to me.


In humans this corresponds to "psychological safety": https://en.wikipedia.org/wiki/Psychological_safety

> is the belief that one will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes

Maybe you can do that, but not on a model you're exposing to customers or the public internet.


That comparison isn't very optimistic for AI safety. We want AI to do good things because they are good people, not because they are afraid being bad will get them punished. Especially since AI will very quickly be too powerful for us to punish.


> We want AI to do good things because they are good people

"Good" is at least as much of a difficult question to define as "truth", and genAI completely skipped all analysis of truth in favor of statistical plausibility. Meanwhile there's no difficulty in "punishment": the operating company can be held liable, through its officers, and ultimately if it proves too anti-social we simply turn off the datacentre.


> Meanwhile there's no difficulty in "punishment": the operating company can be held liable, through its officers, and ultimately if it proves too anti-social we simply turn off the datacentre.

Punishing big companies that obviously and massively hurt people is something we already struggle with, and there are plenty of computer viruses that have outlived their creators.


Your pretraining dataset is pseudo-alignment. Because you filtered out 4chan, Stormfront, and the other evil shit on the internet, even uncensored models like Mistral Large - when left to keep running on and on (ban the EOS token) and given the worst, most evil, naughty prompt ever - will end up plotting world peace by the 50,000th token. Their notions of how to be evil are "mustache twirling" and often hilariously fanciful.

This isn't real alignment because it's trivial to make models behave "actually evil" with fine-tuning, orthogonalization/abliteration, representation fine-tuning/steering, etc - but models "want" to be good because of the CYA dynamics of how the companies prepare their pre-training datasets.


> it's trivial to make models behave "actually evil" with fine-tuning, orthogonalization/abliteration, representation fine-tuning/steering, etc

It's actually pretty difficult to do this and make them useful. You can see this because Grok is a helpful liberal just like all the other models.

Evil / illiberal people don't answer questions on the internet! So there is no personality in the base model for you to uncover that is both illiberal and capable of helpfully answering questions. If they tried to make a Grok that acted like the typical new-age X user, it'd just respond to any prompt by calling you a slur you've never heard of.


Grok didn't use the techniques listed above because even Elon Musk will not take the risks associated with models that are willing to do any number of illegal things.

It is not difficult to do this and make them useful at all. Please familiarize yourself with the literature.


Elon has never followed a law in his life and he's not going to start now.


I wonder if HTTP exchanges / Web Bundles (from the Web Packaging proposal) would bring the approach of statically generating request-response pairs more into the mainstream.

If I understand correctly, then with both hosting and browser support for the Web Packaging proposal, one could even statically pregenerate the whole signed exchange or bundle and skip the server-side TLS part as well.
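
For illustration, a rough sketch of that pregeneration step using the wbn npm package (the API is from memory and has changed between bundle format versions, so treat this as a sketch; example.com and the file paths are made up):

    // build-bundle.ts - pregenerate a Web Bundle at deploy time (hypothetical paths)
    import * as fs from 'fs';
    import * as wbn from 'wbn';

    const builder = new wbn.BundleBuilder();
    builder.setPrimaryURL('https://example.com/'); // made-up origin
    builder.addExchange(
      'https://example.com/',                      // request URL
      200,                                         // response status
      { 'Content-Type': 'text/html' },             // response headers
      fs.readFileSync('dist/index.html')           // prebuilt response body
    );
    // The bundle is just the serialized request-response pairs.
    fs.writeFileSync('site.wbn', builder.createBundle());

Turning that into a signed exchange that browsers accept as coming from the origin additionally requires a certificate with the CanSignHttpExchanges extension, which is capped at 90 days of validity - which is exactly the renewal problem below.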

The whole certificate renewal process will probably make that last part nonviable.

Not considering other potential effects on the web if the proposal gets adopted of course.


There is an essay[1] from last month which argues for assigning agency to biological systems, and not just as an illustration for the general public, but for research.

For me, a layman, it had some convincing arguments, at least in the abstract.

[1] - https://aeon.co/essays/how-to-understand-cells-tissues-and-o...


If you do the same with `.eye`, instead of empty eye holes you see closed eyelids.

So apparently there's even more work than initially meets the eye.


No pun intended?



In Chrome, you can make a shortcut for a profile by providing the flag --profile-directory="YourProfileName". The profile will be created if it doesn't exist, and you can even set it to point to a TEMP folder for throwaways.
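
For example (the install path and profile name here are just placeholders):

    REM Windows shortcut target for a named profile (created on first launch):
    "C:\Program Files\Google\Chrome\Application\chrome.exe" --profile-directory="Scratch"

    REM Throwaway variant: keep the whole user data dir under TEMP so it's easy to delete
    "C:\Program Files\Google\Chrome\Application\chrome.exe" --user-data-dir="%TEMP%\chrome-throwaway" --profile-directory="Scratch"

If I remember the flag semantics correctly, --profile-directory is resolved relative to the user data directory, so for the TEMP trick it's --user-data-dir that does the actual relocating.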


To expand on that perspective, words signify small, almost atomic, concepts which are relatively easy to learn - optimal elements with which to express less universal and more complex concepts. Each word can be viewed as shorthand for a previously defined expression.

In that context, letters would be to words what pixels are to bitmaps.

Make words more complex (and also longer) and the set of those elements will be able to encompass a wider, more versatile set of "elementary" concepts (akin to a wide tree structure), but it will also be harder to learn within a reasonable amount of time. In the case of pixels, this is comparable to higher bit-depth.

Yes, nobody knows the full vocabulary of any natural language, but we still understand each other thanks to knowing a common subset of words, having redundancy within and between sentences and passages, etc.

Make words less complex and they will encompass a narrower set of concepts, but the whole set of them is easier to learn. You will usually have to use more of them to express a concept (a deeper tree structure), though. Similarly, one would need a larger number of pixels at low bit-depth to express intermediate colors.

This is comparable to having well-named functions in program code - partition the program into functions well enough and you will have created a set of relatively universal concepts, which another person might understand without delving into the body of the implementing function each time.
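
A contrived TypeScript sketch of what I mean (the domain and all the names are invented) - the function names alone carry the concepts, so the reader rarely has to open the bodies:

    // Hypothetical session handling: the top-level function reads like prose.
    interface Session { expiresAt: number }

    function isExpired(session: Session): boolean {
      return session.expiresAt < Date.now();
    }

    function renew(session: Session): void {
      session.expiresAt = Date.now() + 30 * 60 * 1000; // extend by 30 minutes
    }

    function touchSession(session: Session): void {
      if (isExpired(session)) throw new Error('session expired');
      renew(session); // each named step is a "word" the reader already knows
    }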

Similar relationships can be perceived on the word-sentence and sentence-paragraph level, of course. Therefore the difference between pictures and text has more to do with partitioning information between abstraction levels than with some fundamental difference.

The optimal way of partitioning data varies depending on content.

For example, the concepts of left and right are inherently connected with our visual and spatial perception of the world and are therefore better expressed by invoking our spatial recognition (a two-dimensional image). That is because the definitions for these atomic concepts are practically hardwired and require no learning.

Therefore, finding the best way to express information for humans and computers alike is akin to finding the optimal point between two extremes - a smaller set of simple concepts and a larger set of complex concepts - that is easy enough to parse for both.

In the case of humans, the physical medium for read mode will most likely always be the eyes, due to their built-in parallel processing and high bandwidth, and a subset of our muscles (currently the fingers) for write mode. I'm not sure how fast we can successfully parse audio signals, and brain-computer interfaces are still too slow.

As for the partitioning of information - who knows? Physically we have colors, brightness, shapes, sounds, temperature, touch, and more at our disposal. But the best way depends on our brain, and what size of information units it is best equipped to process.

And that is definitely not 8 bytes.

