
Nitpick, but France never left NATO proper, only the integrated command, and reintegrated it in 2008 under Sarkozy.

That's also something Rails helps abstract away by automatically deferring enqueues until after the transaction has completed.

Even SolidQueue behaves that way by default.

https://github.com/rails/rails/pull/51426
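A minimal plain-Ruby sketch of the idea, not the real Rails implementation (the `FakeTransaction` class and the `:welcome_email` job are hypothetical stand-ins):

```ruby
# Sketch: enqueues requested inside a transaction are buffered and
# only flushed once the transaction commits, so a worker can never
# pick up a job referencing a record that isn't committed yet.
class FakeTransaction
  def initialize
    @after_commit = []
  end

  # Buffer the work instead of running it immediately.
  def enqueue(job)
    @after_commit << job
  end

  # Flush buffered work only once the transaction commits.
  def commit
    @after_commit.each(&:call)
  end
end

tx = FakeTransaction.new
ran = []
tx.enqueue(-> { ran << :welcome_email })
ran.empty? # => true, the "worker" can't see the job yet
tx.commit
ran        # => [:welcome_email]
```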


> The ideal situation with Rails would be if there is a simple way to switch back to Redis

That's largely the case.

Rails provides an abstracted API for jobs (Active Job). Of course some applications depend on queue-implementation-specific features, but in the general case, you just need to update your config to switch over (and of course handle draining the old queue).
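For the common case, the switch is roughly a one-line config change (the adapter symbols below are standard Active Job adapter names; treating Solid Queue → Sidekiq as the example is just an illustration):

```ruby
# config/environments/production.rb — hypothetical switch back to a
# Redis-backed queue; job classes built on Active Job stay unchanged.
config.active_job.queue_adapter = :sidekiq # was :solid_queue
```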


You are describing bootsnap.

And yes I proposed to integrate bootsnap into bundler ages ago, but got told to go away.


Perhaps it's time to try again - bootsnap is definitely stable enough now, which it really wasn't early on.


Yes, but if your CI isn't terrible, you have the dependencies cached, so subsequent runs are almost instant, and more importantly, you don't have a hard dependency on a third-party service.

The reason for speeding up bundler isn't CI, it's newcomer experience. `bundle install` is the overwhelming majority of the duration of `rails new`.


> Yes, but if your CI isn't terrible, you have the dependencies cached, so subsequent runs are almost instant, and more importantly, you don't have a hard dependency on a third-party service.

I’d wager the majority of CI usage fits your bill of “terrible”. No provider offers OOTB caching in my experience, and I’ve worked with multiple in-house providers, Jenkins, TeamCity, GHA, Buildkite.


GHA with the `setup-ruby` action will cache gems.

Buildkite can be used in tons of different ways, but it's common to use it with docker and build a docker image with a layer dedicated to the gems (e.g. COPY Gemfile Gemfile.lock; RUN bundle install), effectively caching dependencies.
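The layer-caching trick described above looks roughly like this (image tag and paths are illustrative):

```dockerfile
FROM ruby:3.4

# Copy only the dependency manifests first, so this layer -- and the
# `bundle install` below -- stays cached until the Gemfile changes.
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install

# App code changes don't invalidate the cached gem layer above.
COPY . .
```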


> GHA with the `setup-ruby` action will cache gems.

Caching is a great word - it only means what we want it to mean. My experience with GHA default caches is that it’s absolutely dog slow.

> Buildkite can be used in tons of different ways, but it's common to use it with docker and build a docker image with a layer dedicated to the gems (e.g. COPY Gemfile Gemfile.lock; RUN bundle install), effectively caching dependencies.

The only way Docker caching works is if you have a persistent host. That’s certainly not most setups. It can be done, but then running in Docker doesn’t gain you much at all: you’d see the same caching speedup if you just ran it on the host machine directly.


> My experience with GHA default caches is that it’s absolutely dog slow.

GHA is definitely far from the best, but it works, e.g. 1.4 seconds to restore 27 dependencies: https://github.com/redis-rb/redis-client/actions/runs/205191...

> The only way docker caching works is if you have a persistent host.

You can pull the cache when the build host spawns, but yes, if you want to build efficiently, you can't use ephemeral builders.

But overall that discussion isn't very interesting because Buildkite is more a kit to build a CI than a CI, so it's on you to figure out caching.

So I'll just reiterate my main point: a CI system must provide a workable caching mechanism if it wants to be both snappy and reliable.

I've worked for over a decade on one of the biggest Rails applications in existence, and restoring the 800ish gems from cache was a matter of a handful of seconds. And when rubygems.org had to yank a critical gem for copyright reasons [0], we continued building and shipping without disruption while other companies with bad CIs were all sitting ducks for multiple days.

[0] https://github.com/rails/marcel/issues/23


> So I'll just reiterate my main point: a CI system must provide a workable caching mechanism if it wants to be both snappy and reliable.

The problem is that none of the providers really do this out of the box. GHA kind of does it, but unless you run the runners yourself you’re still pulling it from somewhere remotely.

> I've worked for over a decade on one of the biggest Rails application in existence, and restoring the 800ish gems from cache was a matter of a handful of seconds.

I kind of suspected as much - the vast majority of orgs don’t have a team of people who can run this kind of system. Most places with 10-20 devs (which was roughly the size of the team that ran the builds at our last org) have some sort of script running on cheap-as-hell runners, and they’re not running mirrors or baking base images on dependency changes.


> none of the providers really do this out of the box

CircleCI does. And I'm sure many others.


> My experience with GHA default caches is that it’s absolutely dog slow.

For reference, oven-sh/setup-bun opted to install dependencies from scratch over using GHA caching since the latter was somehow slower.

https://github.com/oven-sh/setup-bun/issues/14#issuecomment-...


This is what I came to say. We pre-cache dependencies into an approved baseline image, and we cache approved and scanned dependencies locally with Nexus and Lifecycle.


They already run truly in parallel in Ruby 4.0. The overwhelming majority of contention points have been removed in the last year.

Ruby::Box wouldn't help reduce contention further; it would actually make it worse, because with Ruby::Box classes and modules have an extra indirection to go through.

The one remaining contention point is indeed garbage collection. There is a plan for Ractor-local GC, but it wasn't sufficiently ready for Ruby 4.0.


I know they run truly parallel when they're doing work, but GC still stops the world, right?

Assuming you mean "because with Ruby::Box classes and modules have an extra indirection to go through." in the second paragraph, I don't understand why that would be necessary. Can't you just have completely separate boxes with their own copies of all classes etc, or does that use too much memory? (Maybe some COW scheme might work, doodling project for the holidays acquired haha)

Anyway, very cool work and I hope it keeps improving! Thanks for 4.0 byroot!


> GC still stops the world, right?

Yes, Ractor local GC is the one feature that didn't make it into 4.0.

> Can't you just have completely separate boxes with their own copies of all classes etc, or does that use too much memory?

Ruby::Box is kinda complicated and still needs a lot of work, so it's unclear what the final implementation will look like. Right now there is no CoW or any type of sharing for most classes, except for core classes.

Core classes are the same object (pointer) across all boxes; however, they have a separate constant and method table per box.

But overall what I meant to say is that Box wouldn't make GC any easier for Ractors.


> It seems Ractor is still work in progress

The Ractor experimental status could almost be removed. They no longer have known bugs, and only one noticeable performance issue is left (missing Ractor-local GC).

But the API was just recently changed, so I think it's better to wait another year.
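A minimal sketch of parallel Ractors, hedging on the API change mentioned above (`Ractor#value` is the newer name; older Rubies used `Ractor#take`):

```ruby
# Two ractors computing in parallel; each has its own lock, so they
# don't contend with each other on the GVL.
r1 = Ractor.new { (1..1_000_000).sum }
r2 = Ractor.new { (1..1_000).reduce(:+) }

# Handle both the old (#take) and new (#value) result APIs.
fetch = ->(r) { r.respond_to?(:value) ? r.value : r.take }

v1 = fetch.(r1) # => 500000500000
v2 = fetch.(r2) # => 500500
```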

> I vaguely remember reading Shopify is using Fiber / Rack / Async in their codebase.

Barely. There is indeed a management obsession with fibers even when they don't make sense, so there is some token usage here and there, but that's it.

There is one application that was converted from Unicorn to Falcon, but Falcon isn't even configured to accept concurrent requests, so the gain is basically zero.

As for Rails, there aren't many use cases for fibers there, except perhaps Active Record async queries; but since most users use Postgres and PG connections are extremely costly, few people run AR async queries with enough concurrency for fibers to make a very noticeable difference.


I just searched for the tweet again and it states [1] "Falcon is now serving most of Shopify storefront traffic: web and API." — or is that an out-of-context quote?

[1] https://x.com/igrigorik/status/1976426479333540165


I know that tweet, it's real, but that doesn't contradict my comment.

They indeed replaced Unicorn with Falcon in one application, but Falcon is configured in "Unicorn mode" (no concurrent requests), so the gain is effectively zero.

Also note how they don't share any performance metrics, contrary to https://railsatscale.com/2023-10-23-pitchfork-impact-on-shop...


Thanks. That makes more sense.


It was removed because rubygems was made to be required by default, so it became useless.

The stdlib still contains `un.rb` though: https://github.com/ruby/ruby/blob/d428d086c23219090d68eb2d02...


Wow, I did not know about un.rb. Looks cool though.


Ruby always had immutable (frozen) strings, so no, this never was a reason for Symbols existence.


They aren't interned frozen strings (unless they were symbols; String#intern was, and still is, an alias for String#to_sym, and String#freeze did not and does not imply String#intern or String#to_sym). It also took an extra step, even for literals, to either freeze or intern them prior to Ruby 2.3 introducing the "# frozen_string_literal: true" file-level option (and Ruby 3.4 making it unnecessary because it is on by default.)

Amusingly, string literals interned by default in 3.4, or because of the setting in earlier (>= 2.3) Rubies, are still (despite being interned) Strings, while Strings interned with String#intern are Symbols.
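The distinction is easy to check directly (plain Ruby, no magic comment assumed):

```ruby
# Freezing a String keeps it a String; interning produces a Symbol.
s = "hello".freeze
s.frozen?               # => true
s.class                 # => String
s.intern.class          # => Symbol
s.intern.equal?(:hello) # => true, one Symbol object per name
```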


> They aren't interned frozen strings

Doesn't matter. The parent claim was:

> You'd just have a broken VM if you used mutable strings for metaprogramming in Ruby

From day one it was possible to freeze strings used in metaprogramming. I mean, Ruby Hashes do that to string keys.
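That Hash behavior is easy to observe (plain Ruby; the key name is arbitrary):

```ruby
# Hash#[]= dups and freezes an unfrozen String key on insertion, so
# later mutation of the original can't corrupt the hash.
h = {}
k = +"name"            # unary plus guarantees an unfrozen string
h[k] = 1
h.keys.first.frozen?   # => true
h.keys.first.equal?(k) # => false, the hash stored a frozen copy
```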

> Ruby 3.4 making it unnecessary because it is on by default.

That's incorrect: https://byroot.github.io/ruby/performance/2025/10/28/string-...


HashWithIndifferentAccess was added because back in the day symbols were immortal, hence could not be used for request parameters.

It no longer makes sense today, and any new use of it is a smell.

