As a long-time Kagi user, the thing I miss the most is Google Maps integration for search results. It's nice to search for a restaurant or an address, see results for it, and with one click open Google Maps to see how to get there and what attractions are nearby. Google Maps is such a large moat for Google, especially in locations where Apple Maps (the only real alternative) has poor coverage.
Outside of that use case, I enjoy using Kagi and recommend it to most people.
This. I didn't know it was an EU-only thing, but sometimes you get a map displayed in Google search results and there's no way to actually go to Google Maps besides clicking the "directions" button (and I think even that button isn't always there).
Just recently I created a bang in Kagi that redirects me to Google Maps, centered roughly on my home, with whatever query I typed.
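For anyone who wants to copy the idea, a minimal sketch of such a bang (this assumes Kagi substitutes %s with the typed query, per its custom-bang settings; the coordinates and zoom are made-up placeholders you'd swap for your own area):

```
https://www.google.com/maps/search/%s/@52.5200,13.4050,13z
```

The @lat,lng,zoom suffix just centers the initial map view; Google Maps then searches for the query near it.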
Yes, Kagi has a Google Maps link built in, but it doesn't integrate very smoothly; it ends up linking to strange results in Google Maps. I would almost prefer that Kagi just integrated Google Maps until the Kagi Maps product is mature. It's my only stumbling block with Kagi.
I assume it's region-specific. There used to be alternatives in my area, but they've all died, and even with all the fake Google reviews, it's the only way to get an idea about restaurants.
I never run into a business that doesn't exist on Google Maps and I rarely run into incorrect hours.
I constantly can't find businesses that I'm looking for on OSM and they never have correct hours.
I use OSM for many directions, but only after looking the place up in Google Maps and copying the plus code, because places don't come up reliably in OSM search.
In my opinion, no, it's too shit compared to Google Maps, and Apple Maps is also not great. Kagi has its own maps, which seem to be based on Apple Maps. Apple has little information outside the US (I've heard it's better in the US, but I know it's not great in Europe or Africa), things like operating hours.
FWIW I found Google Maps to be terrible on that front in Japan. Posted operating hours seemed to have no particular relation to whether a restaurant would actually be open or not.
Or just set up a browser search keyword/engine to go straight to Google Maps, if that's what you want. I have Kagi as my default but keep a small handful of keyword bookmarks for when I'm after something that isn't a general web search: "m <location>" for Google Maps, "i <title>" for IMDB, "p <query>" for Kagi image search, "d <query>" for D&D Rules Search, you get the idea.
This way Kagi doesn't even see my query, I don't have to wait for a redirect, I get to set up the shortcuts myself, and I can switch any of my search providers (even the default) without affecting my "bangs".
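For reference, in Firefox these are keyword bookmarks and in Chrome they're custom site-search entries; both substitute %s with whatever follows the keyword. A minimal example for the Google Maps one:

```
Keyword: m
URL:     https://www.google.com/maps/search/%s
```

Typing "m some address" in the address bar then goes straight to a Google Maps search without any intermediary.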
Consider what companies are launching these days and the talent they need to do it. It's AI companies, it's hardware, it's hard science. It's fewer "Uber for pet grooming" startups. There's less appetite for investing in the latter, and _normal_ software developers are less useful for the former. Yes, ZIRP has contributed, but there are wider economic and social issues at play that are above my pay grade.
The safest space continues to be distributed systems and systems programming in general IMO. You'll still find work at hyperscalers. You'll still find work in the new spaces. Until AI can operate these systems, there'll be a spot for us bit janitors for a little while longer.
> Seize or terminate their patents and copyrights. Issue arrest warrants for criminal evasion. Compulsory licensing of x86 to a European design firm immunized by EU law.
My eyes rolled so far back I hurt myself.
Please provide some examples of where the EU has been able to do even a fraction of what you listed to large, US-based firms in the past.
Looking to the future: if you want a trade war and an excuse for the new US administration to completely neglect its NATO obligations, this is a great start.
> Because the harsh truth is that none of those things are actually big issues that would justify learning slightly different syntax.
That's part of the cost, but the other cost is building this into existing toolchains and deployment mechanisms, getting buy-in from teams, and ensuring _everyone_ learns the syntax.
And the unstated fear: is the code it generates actually good? Am I going to have silly issues down the road that are hard to debug and require diving into generated code to find some concurrency bug?
Thank you for sharing your perspective. I genuinely appreciate honest feedback. My goal is always to add value to discussions, but it seems I’ve fallen short in this instance. If there’s a specific way I could clarify or improve my comments, I’d be grateful to hear it.
Regarding my company, I respect your decision, but I hope that if our paths cross again, I might have the opportunity to change your mind through actions that demonstrate the value we provide to our customers.
It's insane, the excuses being made here for Netflix's apparently unique circumstances.
They failed. Full stop. There is no valid technical reason they couldn’t have had a smooth experience. There are numerous people with experience building these systems they could have hired and listened to. It isn’t a novel problem.
Here are peer companies that livestream just fine, ignoring traditional broadcasters:
- Google (YouTube live), millions of concurrent viewers
- Amazon (Thursday Night Football, Twitch), millions of concurrent viewers
- Apple (MLS)
NBC live streamed the Olympics in the US for tens of millions.
As a cofounder of a CDN company that pushed a lot of traffic: the problem with live streaming is that you need to propagate peak viewership through a lot of different providers. The peering/connectivity deals are usually not structured for peak capacity that is many times the normal 95th percentile. You can provision more connectivity, but you don't know how many people will want to watch the event.
Also, live events can be trickier than stored files, because you can't offload to the edges beforehand to warm up the caches.
So Netflix had two factors outside their control:
- unknown viewership
- unknown peak capacities outside their own networks
Both are solvable, but if you serve "saved" content you optimize for a different use case than live streaming.
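To make the peak problem concrete with invented numbers: a CDN whose normal 95th-percentile egress is, say, 20 Tbps could suddenly face 60M concurrent live viewers at an average of 8 Mbps each, which is roughly 480 Tbps of instantaneous demand, far beyond what peering deals sized for everyday traffic will carry.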
I don't disagree that Netflix could have / should have done better. But everybody screws these things up. Even broadcast TV screws these things up.
Live events are difficult.
I'll also add that the other things you've listed are generally multiple simultaneous events. When 100M people are watching the same thing at the same time, they all need a lot more bitrate at the same moment, say when there's a smoke effect as Tyson is walking into the ring, so it gets mushy for everyone. IMHO, someone on the event production staff should have an eye for which effects won't compress well and try to steer away from them, but that might not be realistic.
I did get an audio dropout at that point that didn't self-correct, which is definitely a "should have done better."
I also had a couple of frames of block-color content here and there in the penultimate bout. I've seen this kind of thing on lots of hockey broadcasts (streams or OTA), and I wish it wouldn't happen... I didn't notice anything like that in the main event, though.
The experience would likely have been worse if there were significant bandwidth constraints between Netflix and your player, of course. I'd love to see a report from Netflix about what they noticed and what they did to try to avoid those, but there's a lot outside Netflix's control there.
Disney+ Hotstar managed ~60M concurrent livestreams for the Cricket World Cup a year ago. The problem has been solved. Livestreaming sports just has different QoS expectations than on-demand.
I wouldn't say it's a solved problem; how many other companies are pulling off those numbers? Isn't that the current record for concurrent streams? And wasn't it mostly to mobile devices?
The size of the engineering headcount is not informative; it really depends on how much is in-house and how much is external. For Hotstar, the external part would be the parent company (Disney, or Fox before that) or staffing from IT consulting organizations, none of whom would be on the payroll.
For what it's worth, all things being equal, there would be a lot more non-engineering staff among Hotstar's 2,000 employees than at a streaming company of similar size or user scale. Hotstar operates in a challenging and fragmented market: India has 10+ major languages (and corresponding TV, music, and movie markets). Technically there is not much difference from what Netflix or Disney has to do for i18n, but operationally each market needs separate sales, distribution, and operations.
---
P.S. Yes, Netflix operates in more markets (including India) than anybody else. However, if you actually use Netflix for almost any non-English content, you will know how weak their library and depth in other markets are; their usual model in most of these markets is to offer a few big, high-quality (for that market) titles rather than build depth.
P.P.S. Also yes, the Indian market is seeing consolidation, in the sense that many streaming releases are multilingual and use major stars from more than one language market to draw audiences (not new, but growing in popularity as distribution becomes cheaper with streaming). However, this is only seen in big-banner productions, as tastes are quite different in each market and it can't scale to run-of-the-mill content.
Amazon had their fair share of livestream failures, and for notably fewer viewers. I don't think they deserve a spot on that list. I briefly worked in streaming media for sports, and while it's not a novel problem, there are so many moving parts and points of failure that it can easily all go badly.
There is no one "Amazon" here; there are at least three:
* Twitch: Essentially invented live streaming. Fantastic.
* Amazon Interactive Video Service [0]: Essentially "Twitch As A Service", built by Twitch engineers. Fantastic.
* Prime Video. Same exact situation as Netflix: original expertise is all in static content. Lots of growing pains with live video and poor reports. But they've figured it out: now there are regular live streams (NHL and NFL), and other channel providers do live streaming on Prime Video as a distribution platform.
Doesn't Twitch almost fall over (with other, non-massive streams impacted) whenever anyone gets close to 4-5M concurrent viewers? I remember the last time it happened, things started breaking even for smaller streams. Even if Netflix struggled with the event, streaming other content worked just fine for me.
It's not full stop. There are reasons why they failed, and for many it's useful and entertaining to dissect them. This is not "making excuses" and does not get in the way of you, apparently, prioritizing making a moral judgment.
Or, hear me out here, it's a wild concept: just make it work.
You know, like every other broadcaster, streaming platform, and company that does live content has been able to do.
Acting like this is a novel, hard problem that still needs solving, and that we need to "upsell" it in tiers because Netflix is incompetent, as if live broadcasting hasn't been around for 80+ years, is so fucking stupid.
> I've found that using the websockets extension really helps with automatically keeping the frontend in sync.
I choked reading this imagining people thinking they're doing something simple (as in not complex) by introducing websockets so they can keep state in their Go backend and sync it with their front-end, ya know, to keep track of the # of TODOs checked.
But it is simple :) The underlying technology might be more complex, but the library is solid, it's trivial to update any part of the page once you have the libraries set up, and you don't need to write any javascript. Works for me!
I think you may mean easy. It may be _easy_, but it's not simple. There are so many more moving pieces, failure modes, and operational issues to consider now. WebSocket connections aren't free.
I guess I just have to disagree. My experience is that it is robust, and removes an entire category of problems that appear when your state is spread across the back- and frontend.
As someone else mentioned, SSE is a somewhat simpler protocol that achieves the same purpose. Same idea though.
If you mean SSE, then yes that would work just as well (unless you need the bidirectionality for the client to modify some aspect of the connection after the page has loaded). There is an htmx-sse extension too.
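For anyone curious what this looks like in practice, a minimal sketch (the endpoint path and fragment contents are invented for illustration; the attributes are the documented ones from the htmx SSE extension, and the server side here is plain Go net/http):

```html
<!-- assumes htmx and its SSE extension are loaded -->
<div hx-ext="sse" sse-connect="/events" sse-swap="message">
  waiting for updates...
</div>
```

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	http.HandleFunc("/events", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/event-stream")
		w.Header().Set("Cache-Control", "no-cache")
		flusher, ok := w.(http.Flusher)
		if !ok {
			http.Error(w, "streaming unsupported", http.StatusInternalServerError)
			return
		}
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-r.Context().Done():
				return // client disconnected
			case t := <-ticker.C:
				// Each SSE message carries a rendered HTML fragment;
				// htmx swaps it into the div above.
				fmt.Fprintf(w, "data: <span>server time: %s</span>\n\n", t.Format(time.Kitchen))
				flusher.Flush()
			}
		}
	})
	http.ListenAndServe(":8080", nil)
}
```

No client-side JavaScript of your own, and the backend stays the single source of truth.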
I'm not sure how XHR alone would let you automatically get backend state changes reflected on the frontend. You can poll one or more endpoints, but that's more complicated to get right and less efficient.
There's nothing to sync; the state is only in the backend. If you tell the user that the TODO is checked while you're only keeping track of it in the frontend, and it's lost before it ever syncs to the server, your UI lied to the user when it showed the item as checked. With state on the backend, the user doesn't see their data as saved until, you know, it actually is. And if all the state is rendered from the backend, it can't get out of sync with the display.
I hate it when a UI tells me an action succeeded when really there's an asynchronous background task happening that may still fail.
“One feature that IMHO would be game changer for tools like this (and are lacking even in paid services like Hatchbox.io, which is overall great) is streaming replication of databases.”
And then I mentioned that I believe fly.io has litestream support. I think it’s fairly relevant to the comment/thread.