Yes, as someone who usually flies with my GF, I love this feature! Unfortunately Air Canada's implementation is abysmal: any time there's a pilot announcement, it interrupts the game long enough to break the network connection and end the game.
Among endurance athletes, collagen supplements have become increasingly popular over the past couple of years -- from what I understand, though, the evidence is mixed.
I don't get what the supposed mechanism of action is here. Collagen is a hard-to-digest protein, and it has to be digested to be processed, at which point it's no longer collagen. Why not just eat any other protein source instead?
Yes, that seems to be the criticism, and the reason for the mixed results. But not everyone eats a complete-protein diet, so the idea is that even though the collagen breaks down, you then have all the building blocks you'd need, should your body choose to use them to build collagen.
But I agree: I'd rather solve deficiencies at the diet level than the supplement level, and so far I haven't personally added collagen.
TBH I suspect marketing plays a big role. "Collagen = good, therefore just buy it and eat it" makes logical sense if you don't actually do any research first.
Likely yes, but I had enough chicken breast for a lifetime during my fit 20s. I just don't feel like stuffing myself with tasteless protein sources anymore, and the tasty ones (burgers, steaks, grilled salmon, etc.) cause unwanted side effects and risks when consumed in large amounts.
Even bodybuilders and powerlifters admit that 2g protein per kg of body weight is about all you need. You can get that with a normal diet and a couple of protein shakes, which taste fine if you use milk and half a banana. You don't need to eat a whole chicken breast for every meal.
I made spaghetti bolognese last night and it had 60 grams of protein per 800 kcal serve. Admittedly I used lean kangaroo mince, because I'm Australian and it was on sale. Still: three meals like that and you wouldn't even need a protein shake.
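To put rough numbers on that (the 80kg body weight below is just an example, and the 2g/kg figure is from the parent comment):

    # Rough arithmetic for the claims above. 80 kg body weight is an
    # example; 2 g/kg is the (generous) target from the parent comment.
    target_g = 2.0 * 80                  # 160 g protein/day
    bolognese_serving_g = 60             # per ~800 kcal serving, as described
    total_g = 3 * bolognese_serving_g    # three such meals
    print(total_g, total_g >= target_g)  # 180 True: no shake needed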
Maybe, maybe not. It depends on a variety of factors: the activities you do, your age, and so on. Maybe athletes need more collagen than people who don't exercise.
Also, complete protein sources are definitely not easy to get. Good luck if you have dietary constraints.
I just wish whey was as easy to get unflavored. Let me handle the flavoring, you provide the protein.
Yes, I'm aware you can buy unflavored whey protein, but it's more expensive and you have to order it online. I can get a huge tub of "Delicious quadruple chocolate delight" BS from Costco for comparative pennies.
You need to eat a ton of quinoa. Only the most dedicated vegan bodybuilder will eat that much quinoa. No thanks.
Also, only animal sources contain the amino acid hydroxyproline in significant amounts, and you pretty much only get it from collagen sources.
So while quinoa, and even whey, might be advertised as complete protein sources, they do not contain all the amino acids humans can use in significant amounts.
Hydroxyproline isn't essential though: humans can produce what they need as part of making collagen, which they, like most other animals, produce themselves.
> You need to eat a ton of quinoa
Most don't: at 4.4% protein, a 65kg man like me needs about 1.5kg of cooked quinoa per day, and it's not a big deal (see the quick arithmetic after the list):
- You'll digest it like a king: quinoa is full of soluble and insoluble fiber, and you won't feel bloated from eating that much. Easy in, easy out.
- Like milk or wheat, it can be transformed in many ways: flour, flakes, marinades, beverages, soups... always a joy to cook and eat; no boredom with that grain.
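Quick sanity check on those numbers (the 4.4% protein figure and the ~1g of protein per kg of body weight per day target are taken from the comment itself, not verified):

    # Sanity check on the quinoa claim, using the comment's own figures.
    body_kg = 65
    protein_need_g = body_kg * 1.0            # ~65 g/day at ~1 g/kg
    quinoa_protein_frac = 0.044               # 4.4% protein, cooked
    quinoa_g = protein_need_g / quinoa_protein_frac
    print(f"{quinoa_g / 1000:.2f} kg of cooked quinoa per day")  # ~1.48 kg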
Can't speak for the bodybuilders, though I'm sure most manage their nutrition. I think soy/pea is more popular there.
By the way, very few people eat only quinoa or any other single food. They also get amino acids from grains, pulses and seeds... even fruits like tomatoes, though that's obviously negligible.
Quinoa is also full of minerals and vitamins, and its protein has the same biological value (BV) as beef - or higher, depending on the source.
> its high-quality protein, complete set of amino-acids, and high content of minerals and vitamins. [0]
> exceptional balance between oil, protein and fat [1]
> Quinoa has a high biological value (73%), similar to that of beef (74%) [2]
There are plenty of studies showing that collagen supplementation helps athletes. Which means there are cases where the body doesn’t produce enough collagen for itself. And as you age, your body produces less collagen. Reasons enough to supplement with collagen.
Something like this can be reworded to make it sound like a “good” thing: “mission-centricity” or “dedication” or some other label that spins this as being committed to the company (leaving out that this is at their own expense).
I wonder how Air Canada reconciles this. There was a popular Globe and Mail article a while ago that gave awful rankings to Air Canada's water tanks -- so the company put up signs in the bathrooms saying the water is non-potable and called it a day.
Not super comforting if they're then using the same 'non-potable' water to make coffee...
Is there any reason to expect there would be "toxins", given that it's just water? I can imagine accumulated toxins if it's a pack of chicken breasts left in a hot car for 8 hours, but if it's water it should be fine? After all, boiling water is a tried and true way of making it safe to drink.
Yes, there are substances that slip through, but it works well enough for most cases that it's probably fine. Otherwise you get into weird edge cases like "what if there are prions in the water?!?" or whatever.
Heavy metals are a big problem, especially from cheap brass fittings common in outdoor water hoses.
Indoor plumbing, by contrast, uses copper and/or PEX tubing, so there’s near-zero risk of metal poisoning (caveat on cheap PEX fittings: don’t use those).
I was recently talking to a colleague I went to school with, and they said the same thing, but for a different reason. We both did grad studies with a focus on ML, and at the time ML as a field seemed to be moving so fast. There was finally a lot of excitement around AI again after the 'AI winter'. It was easy to bring something new to the field, and there were so many unique and interesting models coming out every day. There was genuine discussion about a viable path to AGI.
Now, basically every new "AI" feature feels like a hack on top of yet another LLM. And sure the LLMs seem to keep getting marginally better, but the only people with the resources to actually work on new ones anymore are large corporate labs that hide their results behind corporate facades and give us mere mortals an API at best. The days of coding a unique ML algorithm for a domain specific problem are pretty much gone -- the only thing people pay attention to is shoving your domain specific problem into an LLM-shaped box. Even the original "AI godfathers" seem mostly disinterested in LLMs these days, and most people in ML seem dubious that simply scaling up LLMs more and more will be a likely path to AGI.
It seems like there's more excitement around AI for the average person, which is probably a good thing I suppose, but for a lot of people who were into the field, it's not really that fun anymore.
In terms of programming, I think they can be pretty fun for side projects. The sort of thing you wouldn't have had time to do otherwise. For the sort of thing you know you need to do anyway and need to do well, I notice that senior engineers spend more time babysitting them than benefitting from them. LLMs are good at the mechanics of code and struggle with the architecture / design / big picture. Seniors don't really think much about the mechanics of code, it's almost second nature, so they don't seem to benefit as much there. Juniors seem to get a lot more benefit because the mechanics of the code can be a struggle for them.
> Now, basically every new "AI" feature feels like a hack on top of yet another LLM.
LLM user here with no experience of ML besides fine-tuning existing models for image classification.
What are the exciting AI fields outside of LLMs? Are there pending breakthroughs that could change the field? Does it look like LLMs are a local maximum, and other approaches will win through - even just in other areas?
Personally I'm looking forward to someone solving 3D model generation as I suck at CAD but would 3D print stuff if I didn't have to draw it. And better image segmentation/classification models. There's gotta be other stuff that LLMs aren't the answer to?
Well, one of the inherent issues is the assumption that text is the optimal modality for everything we try to use an LLM for. LLMs are statistical engines designed to predict the most likely next token in a sequence of words. Any 'understanding' they do is ultimately incidental to that goal, and once you look at them that way, a lot of the shortcomings we see become more intuitive.
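As a minimal sketch of that framing, this is essentially all a decoder-only LLM does at inference time: greedy decoding with a small off-the-shelf model (gpt2 here is just a convenient example from the Hugging Face hub):

    # An LLM as a "most likely next token" machine: greedy decoding.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(10):
            logits = model(ids).logits         # a score for every vocab token
            next_id = logits[0, -1].argmax()   # greedy: take the likeliest one
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tok.decode(ids[0]))

Everything else (sampling temperature, chat tuning, tool use) is layered on top of that loop.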
There are a lot of problems LLMs are really useful for, because generating text is exactly what you want to do. But there are tons of problems where we'd want some sort of intelligent, learning behaviour that doesn't map to language at all. There are also problems that can "sort of" be mapped to a language problem but make wasteful use of resources compared to an (existing or potential) domain-specific solution. For purposes of AGI, you could argue that trying to express "general intelligence" via language alone is fundamentally flawed altogether -- although that quickly becomes a debate about what actually counts as intelligence.
I pay less attention to this space lately, so I'm probably not the most informed. Everyone seems so hyped about LLMs that I feel like a lot of other progress gets buried, but I'm sure it's happening. There are problem domains that are currently solved better with other paradigms: self-driving tech, recommendation systems, robotics, game AIs, etc. Some of the exciting work likely to solve problems better in the future: world models, graph neural nets, multimodality, reinforcement learning, alternatives to gradient descent, etc. Whether LLMs are a local maximum is debatable, but many leading AI researchers seem to think so -- Yann LeCun, for example, recently said LLMs 'are not a path to human-level AI'.
It’s now moving faster than ever. Huge strides have been made in interpretability, multimodality, and especially the theoretical understanding of how training interacts with high-dimensional spaces. E.g.: https://transformer-circuits.pub/2022/toy_model/index.html
Thanks, this seems interesting. I'll give it a read. I admittedly don't keep tabs as much as I should these days. I feel like every piece of AI news is about LLMs. I suppose I should know other people are still doing interesting things :)
I often do this in meetings and have gotten into the habit of saying "I'm thinking". It's not much but it gives both of us time to think and explicitly makes it clear I don't expect the person to say something. I think that helps.
Fair enough, though I do like the parent’s a bit better; blurting out “processing” feels like too high a default setting compared to “I’m thinking” :) - not that any of it matters anyway, communicating _something_ gets you there. The rest is just triaging around the edges of what people will call you weird for, and if they do, they were going to anyway.
"Give me a second" is something I say when someone just has to break the silence with some unproductive comment. Having 20-30 seconds to think silently should be a completely normal thing.
A whole other part of this argument that could be made is about the inherent assumption that a ping timeout is caused by an event that only affects one machine.
I'll admit to sending a couple of the messages that made Linksys routers restart. I also set up automatic k-lines on Snoonet for those very strings, years ago.
There are events that may affect more than one machine which are not netsplits.
e.g. an ISP serving many of the same users has an outage; an IRC client many users share has a bug; users in the same time zone have automated system updates run at the same time; the IRC server experiences an upstream network disruption affecting only some routes; a regional power outage occurs; a hosted bouncer service with many users goes down; and so on...
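To make it concrete, here's a hypothetical sketch of the kind of naive heuristic being criticized: flag a "split" whenever many timeouts land in one window. Every scenario above trips it even though no server-to-server link went down (the window and threshold are made-up numbers):

    # Hypothetical naive heuristic: many ping timeouts close together
    # "looks like" a netsplit, which the scenarios above all mimic.
    from collections import deque

    WINDOW_SECS = 30    # made-up window
    THRESHOLD = 50      # made-up count

    recent = deque()    # timestamps of recent ping timeouts

    def on_ping_timeout(now: float) -> bool:
        """Record a timeout; return True if it looks like a mass event."""
        recent.append(now)
        while recent and now - recent[0] > WINDOW_SECS:
            recent.popleft()
        return len(recent) >= THRESHOLD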
What's the right amount of standards to have when you're writing 9 million lines of code that controls a 30,000 lb machine moving through the sky at Mach 1 with a human life inside?
Player compatibility. Netflix can use AV1 and send it to the devices that support it while sending H265 to those that don't. A release group puts out AV1 and a good chunk of users start avoiding their releases because they can't figure out why it doesn't play (or plays poorly).
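Sketch of why the streaming side is easy: the server can pick a rendition per client, while a static release file ships exactly one codec for everyone. The codec tags and preference order here are illustrative assumptions:

    # Illustrative only: pick the best codec the client says it can decode.
    PREFERENCE = ["av01", "hev1", "avc1"]  # AV1 > H.265 > H.264 (assumed tags)

    def pick_rendition(client_codecs: set[str]) -> str:
        for codec in PREFERENCE:
            if codec in client_codecs:
                return codec
        return "avc1"  # near-universal H.264 fallback

    # A device without AV1 hardware decode still gets a playable stream:
    print(pick_rendition({"hev1", "avc1"}))  # -> "hev1"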
OK this would obviously be bad, I think everyone gets that.
But the note in the article is getting at something that feels interesting. I think there's a more fruitful conversation around "how might this work in spirit?" instead of "would this work literally?"