fizzynut's comments | Hacker News

Honestly I gave up trying to support Apple products a while ago - the fact that iOS and macOS lock the browser version to the OS version makes it such a royal pain in the ass to support.

To be fair, Safari on macOS is not tied to the OS. I don't remember if it ever was, but it definitely isn't anymore.

Unfortunately it was and still is. https://en.wikipedia.org/wiki/Safari_(web_browser)#Version_c....

macOS is slightly more forgiving in that the last two versions can get the latest Safari. However, people tend to keep a computer a lot longer than a phone, and many don't or can't update macOS, so it's not much better.


It's absolutely amazing the degree to which Apple has recapitulated the Internet Explorer debacle of thirty years ago.

Honestly, if you actually need high-end specs then you should just build a PC.

"16 core Zen 5 CPU, 40 core RDNA 3.5 GPU. 64GB of LPDDR5X RAM @ 256 GB/s + stunning OLED" - Easily done as a pc build.

In a world where you can get this laptop with Linux, there's a new set of trade-offs -

- be prepared for a LOT of things not working, because the market for extremely expensive configurations with a high-end CPU + GPU + RAM + monitor + Linux is practically zero.

- when closing the lid and walking to the coffee shop will the battery be dead before you finish your coffee? probably

- will a new GPU/GPU architecture be a headache for the first X years... yes, and if you want to replace it every 2 years, I guess you will have a permanent headache.

- will updating graphics drivers be a problem? yes

- is the text in your "stunning oled" going to be rendered correctly in linux? probably not

- will the wifi chip work in linux? maybe

- will all the ports work/behave? probably not

- will your machine perform worse than a high-end PC from 3 years ago that cost half as much... yes.


Is his build even possible today in a laptop?

In a desktop, you would need a top-of-the-line Threadripper for that 256GB/s of memory bandwidth.

Consumer-grade Zen 5 desktops reach only about 80GB/s in real-world testing, with a theoretical max of slightly over 100GB/s.


AMD Strix Halo (a consumer mobile processor) has theoretical support for 256GB/s of memory bandwidth (quad-channel, 8000 MT/s LPDDR5X, must be soldered, supports 128GB at most).
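As a rough sanity check on those numbers, here is a back-of-the-envelope calculation in C. It assumes a 128-bit bus for dual-channel desktop DDR5 and a 256-bit LPDDR5X bus for Strix Halo, which is my understanding of the two configurations:

    #include <stdio.h>

    /* Theoretical peak bandwidth = bus width in bytes * transfer rate in MT/s. */
    static double peak_gb_per_s(int bus_bits, int megatransfers) {
        return (bus_bits / 8.0) * megatransfers / 1000.0;
    }

    int main(void) {
        /* Desktop Zen 5: dual-channel DDR5 is a 128-bit bus in total. */
        printf("DDR5-6400, 128-bit:    %.1f GB/s\n", peak_gb_per_s(128, 6400)); /* ~102 GB/s */
        /* Strix Halo: 256-bit LPDDR5X-8000. */
        printf("LPDDR5X-8000, 256-bit: %.1f GB/s\n", peak_gb_per_s(256, 8000)); /* 256 GB/s */
        return 0;
    }

Real-world throughput lands below these theoretical peaks, which is consistent with the ~80GB/s figure quoted above for desktops.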


The memory is shared with the GPU, so you should probably compare with a desktop GPU, so 1-2TB/s.


Yes. OneXfly Apex. AMD 395+, OLED panel.


I think the point of him making his own laptop is that he would fix all those software problems.


> - when closing the lid and walking to the coffee shop will the battery be dead before you finish your coffee? probably

Why probably? Going to sleep on lid close is common enough; it's even the default in all OSes/DEs. If you turn off sleep-on-close and drain the battery, that's on you.

> - is the text in your "stunning oled" going to be rendered correctly in linux? probably not

> - will the wifi chip work in linux? maybe

> - will all the ports work/behave? probably not

These seem like odd things to doubt, when Framework has a perfectly working system for Linux and has been doing it for years. No hardware in their systems is unsupported in Linux.

Notably the critique of Framework in the original blog post does not offer these doubts. They are focused instead on the hardware design and tradeoffs between upgradability and uniform bodies. Those are real tradeoffs and Framework cannot solve them all without abandoning the upgradability part.


High-end machines that can easily pull 100W+ are just bad for portability in general - running at max, the battery will last less than an hour. Will sleep mode actually work reliably and not drain the battery? That's an issue on most OSes/laptops. Will video playback in the browser be properly hardware accelerated, or will it drain the battery super fast? Yes, Linux has issues here.

Framework was explicitly ruled out, so: integrated OLED - you really want some integration. If you can't set the brightness, goodbye lifespan. OLEDs also have many different subpixel layouts, which can make text blurry or fringed; maybe you won't notice, but then why buy an OLED for work in the first place? A standalone monitor will definitely have pixel-shift/burn-in protection built in, but if you integrate a panel into the laptop without putting in any work, that support might not come out of the box.

Even if it were a Framework, everything is distro-specific, but I think you only need to know that a "dock megathread" exists to realise that "perfectly working" is a stretch, and that a lot of people have hardware they can't connect or that doesn't work.

That said, if I were to buy a laptop, a mid-range Framework I just do the basics with would probably be great.


Doing that will increase input latency, not decrease it.

There are many tick rates that happen at the same time in a game, but generally grabbing the latest input at the last possible moment before updating the camera position/rotation is the best way to reduce latency.

It doesn't matter if you're processing input at 1000Hz if the rendered output is going to have 16ms of latency embedded in it. If you can render the game in 1ms, then the image generated has 1ms of latency embedded into it.

In a magical ideal world where you know how long a frame will take to render, you could schedule it to start at a specific time to minimise input latency, but that introduces a lot of other problems: it's very vulnerable to jitter, and software scheduling is itself jittery.
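A minimal sketch of the "grab the latest input at the last possible moment" idea. The engine hooks here are hypothetical and stubbed out so it compiles; they are not from any real engine:

    #include <stdio.h>

    /* Hypothetical engine hooks, stubbed out for the sketch. */
    typedef struct { float dx, dy; } Input;

    static Input poll_latest_input(void) { Input in = {0.0f, 0.0f}; return in; } /* read newest raw device state */
    static void  simulate_fixed_step(double dt) { (void)dt; }                    /* game-logic tick */
    static void  update_camera(const Input *in) { (void)in; }                    /* cheap: just builds the view matrix */
    static void  render_frame(void) {}
    static void  present(void) {}

    int main(void) {
        const double logic_dt = 1.0 / 60.0;
        for (int frame = 0; frame < 3; ++frame) {   /* stand-in for the real game loop */
            /* Game logic can tick at whatever rate it likes... */
            simulate_fixed_step(logic_dt);

            /* ...but the input that drives the camera is sampled at the last
               possible moment before rendering, so the presented frame carries
               roughly render-time latency rather than render time plus a whole
               logic tick's worth of staleness. */
            Input in = poll_latest_input();
            update_camera(&in);

            render_frame();
            present();
        }
        return 0;
    }

The input-processing rate barely matters on its own; what matters is how stale the input baked into the presented frame is.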


This would get an error message in C; what are you talking about?


The huge plus of the internet is that you can be disruptive on a global scale on a somewhat even footing with the giants.

If you place a giant burden such that, before you even do anything of value, you need to conform to hundreds of different laws/regulations from 100 different countries, you create a world where only large companies can exist and everyone else is pushed out.


Isn't that the goal of these nonsense laws?


A new feature that fundamentally changes the way a lot of code is structured.

A group of features that only produce a measurable result when combined, but where each one does not work without the others.

A feature that will break a lot of things but needs to be merged now, so that everyone has time to work on fixing the actual problems before deadline X; it conflicts with something every day, and we need to spend our time fixing the actual issues, not fixing conflicts.


The depth is 32-bit, not the index into the file.

If you are nesting 2 billion times in a row (at minimum this means repeating { 2 billion times, followed by a value, before } another 2 billion times), you have messed up.

You have 4GB of "padding"...at minimum.

Your file is going to be petabytes in size for this to make any sense.

You are using a terrible format for whatever you are doing.

You are going to need a completely custom parser because nothing will fit in memory. I don't care how much RAM you have.

Simply accessing an element means traversing a nested object 2 billion levels deep, which in probably any parser in the world is going to take somewhere between minutes and weeks per access.

All that is going to happen in this program is a crash.

I appreciate that people want to have some pointless if(depth > 0) check everywhere, but if your depth is anywhere north of a million in any real-world program, something messed up a long, long time ago, never mind waiting until it hits 2 billion.


> I appreciate that people want to have some pointless if(depth > 0) check everywhere

An after-the-fact check would be the wrong way to deal with UB; you'd need to check for depth < INT_MAX before the increment in order to avoid it.
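A minimal sketch of that ordering (hypothetical parser helper, not from any particular codebase):

    #include <limits.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical depth guard for a recursive-descent parser. */
    static bool enter_scope(int *depth) {
        /* Check BEFORE incrementing: once *depth == INT_MAX, *depth + 1 is
           signed overflow (undefined behaviour), so a check placed after the
           increment comes too late and can even be optimised away. */
        if (*depth >= INT_MAX)      /* in practice you'd pick a far smaller limit */
            return false;
        ++*depth;
        return true;
    }

    int main(void) {
        int depth = INT_MAX;        /* pretend we are already at the limit */
        printf("enter allowed: %s\n", enter_scope(&depth) ? "yes" : "no"); /* prints "no" */
        return 0;
    }

In a real parser you'd use a much smaller limit, per the parent comment, but the check-then-increment ordering is what avoids the UB.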


AI-generated slop. Constantly summarising various parts of the memory hierarchy, graphs with no x-axis, bad units, no real-world examples, and a final conclusion that doesn't match the previous 10 summaries.

The big problem is that it misses a lot of nuance. If you actually try to treat an SSD like RAM and you randomly read and/or write 4 bytes of data that isn't in a RAM cache, you will get performance measured in kilobytes per second, so literally 1,000,000x worse performance. The only way you get good SSD performance is reading or writing large enough sequential chunks.

Generally, a random read/write of a small number of bytes has a similar cost to a large chunk. If you're constantly hammering an SSD for a long time, the performance numbers also tank, and if that happens, your application, which was already under load, can stall in truly horrible ways.

This also ignores write endurance: any data that has a lifetime measured in, say, minutes should be in RAM; otherwise you can kill an SSD pretty quickly.


SSDs have so many cases of odd behaviour. If you limit yourself to writing drive-sector chunks, so 4k, then at some point you will run into erase issues, because the flash erase block is considerably larger than the 4k sectors. But you also run into the limits of the memory buffer and the amount of fast SLC, which caps the long-term sustained write speed. There are lots of these barriers you can break through and watch performance drop sharply, and it's all implemented differently in each model.


Yes, it can be quite brand/technology specific, but chunk sizes of 4/8/16/etc. MB usually work much better for SSDs. The only data I've found to read/write that easily lines up with those chunk sizes is things like video/textures/etc., or cache buffers you fill in RAM and then write out in chunks, as sketched below.
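A minimal sketch of that buffer-then-flush pattern. Everything here is hypothetical illustration; the 8 MB chunk size is an arbitrary example, not a recommendation for any particular drive:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CHUNK_SIZE (8u * 1024u * 1024u)   /* 8 MB; tune per drive/workload */

    typedef struct {
        FILE         *f;
        unsigned char buf[CHUNK_SIZE];
        size_t        used;
    } ChunkWriter;

    /* Flush whatever has accumulated as one large sequential write. */
    static void chunk_flush(ChunkWriter *w) {
        if (w->used > 0) {
            fwrite(w->buf, 1, w->used, w->f);
            w->used = 0;
        }
    }

    /* Append a small record; the SSD only ever sees CHUNK_SIZE-sized writes
       (plus one final partial chunk at the end). */
    static void chunk_append(ChunkWriter *w, const void *data, size_t len) {
        const unsigned char *p = data;
        while (len > 0) {
            size_t space = CHUNK_SIZE - w->used;
            size_t n = len < space ? len : space;
            memcpy(w->buf + w->used, p, n);
            w->used += n;
            p   += n;
            len -= n;
            if (w->used == CHUNK_SIZE)
                chunk_flush(w);
        }
    }

    int main(void) {
        ChunkWriter *w = calloc(1, sizeof *w);        /* heap: the buffer alone is 8 MB */
        if (!w || !(w->f = fopen("out.bin", "wb"))) return 1;
        for (int i = 0; i < 10000000; ++i)            /* many tiny records... */
            chunk_append(w, &i, sizeof i);            /* ...but only a few large writes hit the SSD */
        chunk_flush(w);
        fclose(w->f);
        free(w);
        return 0;
    }

The drive only ever sees large sequential writes, no matter how small the individual records are.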


Is this from experience, or are there any sources on what's sane to use today? I'm building a niche DB and "larger blocks" has been the design direction, but how "far" to go has been a nagging question. (Also, are log-structured approaches still a benefit?)


You are also going to cause a lot of write amplification with bigger blocks, and at some point that is going to limit your performance as well. What really makes this hard is that it depends on how full the drive is, how heavily the drive is utilised, and for how much of the day. Having time to garbage collect results in different performance than not having it.

When you start trying to design tools to use SSDs optimally, you find it's heavily dependent on usage patterns, which makes it very hard to do this in a portable way or one that accounts for changes in the business.


This project is not "business"-bound; it's a DB abstraction, so business concerns are layered outside of it (but it's a worthwhile pursuit, since it rethinks some aspects I haven't seen in all the years of DB announcements here and elsewhere).

And yes, write amplification is one major concern, but the question is: considering how hardware has changed, how does one design to avoid it? Our classic 512-byte, 4k, etc. block sizes seem long gone; do the systems "magically" hide it, or do we end up with unseen write amplification instead?


You should probably just put your objects into a quadtree and traverse that with a pathfinding algorithm (roughly the shape sketched below).
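For what it's worth, a minimal point-quadtree sketch (hypothetical, insertion only; you'd still need a neighbour/region query on top of it to feed a pathfinder):

    #include <stdio.h>
    #include <stdlib.h>

    /* Minimal point quadtree: each node covers a square region and splits into
       four children once it holds more than CAPACITY points. */
    #define CAPACITY 4

    typedef struct { float x, y; } Point;

    typedef struct Node {
        float cx, cy, half;          /* centre and half-size of the region */
        Point pts[CAPACITY];
        int   count;                 /* -1 once the node has been subdivided */
        struct Node *child[4];       /* NW, NE, SW, SE */
    } Node;

    static Node *node_new(float cx, float cy, float half) {
        Node *n = calloc(1, sizeof *n);
        n->cx = cx; n->cy = cy; n->half = half;
        return n;
    }

    static void node_insert(Node *n, Point p) {
        if (n->count >= 0 && n->count < CAPACITY) {           /* leaf with room */
            n->pts[n->count++] = p;
            return;
        }
        if (n->count == CAPACITY) {                           /* full leaf: subdivide */
            Point old[CAPACITY];
            for (int i = 0; i < CAPACITY; ++i) old[i] = n->pts[i];
            n->count = -1;
            float h = n->half / 2.0f;
            n->child[0] = node_new(n->cx - h, n->cy + h, h);  /* NW */
            n->child[1] = node_new(n->cx + h, n->cy + h, h);  /* NE */
            n->child[2] = node_new(n->cx - h, n->cy - h, h);  /* SW */
            n->child[3] = node_new(n->cx + h, n->cy - h, h);  /* SE */
            for (int i = 0; i < CAPACITY; ++i) node_insert(n, old[i]);
        }
        /* Route the point to the quadrant it falls in. */
        int east  = p.x >= n->cx;
        int south = p.y <  n->cy;
        node_insert(n->child[south * 2 + east], p);
    }

    int main(void) {
        Node *root = node_new(0, 0, 100);                     /* 200x200 world */
        for (int i = 0; i < 20; ++i) {
            Point p = { (float)(rand() % 200 - 100), (float)(rand() % 200 - 100) };
            node_insert(root, p);
        }
        printf("root subdivided: %s\n", root->count < 0 ? "yes" : "no");
        return 0;
    }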


Yeah, I thought about that too, but I'm also trying to keep pre-processing work as light as possible.


From zooming into your clip, both ASCII and Unicode are wrong:

- ASCII is off-center, ~43/50 pixel margins

- Unicode is off-center, ~20/25 pixel margins

- Both have different margin sizes

- The button sizes of both are the same.

- The Hide button is offset from both the 8/10/16 selector and the ASCII/Unicode buttons

- Even if everything were correct, because there is no contrast between "Off" and the background, it's going to look wrong anyway

