I've been using Math Academy for a couple of months now seriously (~3.5k XP). Initially I was reviewing topics / concepts that I already had an intuitive understanding of from college courses, and I believe for those it has helped take me to the next level in that I now can, 99% of the time, solve problems in those topics. As I start to approach topics that I haven't fully covered in college or have some intuitive understanding for I do find that I can mechanically carry out the operations, but I lack an understanding of "why" I'm doing this.
I think the author of this makes some good points to this effect. I believe that the best approach is a bit inverse of what the author is saying however. I think that Math Academy should be the base, where 80-90% of your time is spent. The remaining 10-20% should be used to supplement this. Either with lectures, textbook / edutainment reading or other ways to develop more of that intuition.
Can this open multiple tabs / navigate to different domains? When booking a restaurant I might want to confirm what the prices are on the menu or check google maps for the reviews / location.
I'm going to give this a go, I've been self studying linear algebra, and there are certain questions that I can't seem to find good answers or explanations to online that ChatGPT or Claude cannot answer sufficiently. The one concern I have with this, is the async nature of it. One of the reasons I like using ChatGPT or Claude (or a real human if I can find one) to learn is because I can ask clarifying questions and try to relate what they're saying in real time to another concept I'm familiar with. Curious to try it out but I think that an important part of making the videos good is that the student / learning needs to provide a good prompt so that the response can be customized to directly address their area of confusion.
Ah I see why you'd want realtime interaction. In earlier versions there used to be voice chat, so when the learner-and-teacher happen to both be free, you'd talk in real-time. It was removed to focus on the async video aspect, but hearing what you said puts that back in my radar...
When you mention "good prompts", is that in the context of AI, or with asking questions in general? Is it easy/hard to create good prompts from your past experience?
As for Linear Algebra, I noticed you didn't sign up - did something change your mind? Feel free to be brutally honest as it helps me make real progress! If it's easier, feel free to call +1 503 250 3868 (Tokyo time zone), or email eltonlin@alum.mit.edu anytime!
Appreciate you following up! I plan to sign up over the weekend which is normally when I do my longer blocks of studying and have more questions. I think that what I mean is that sometimes if I ask for an explanation about a topic or question, I don't get the magical answer that answers my question after my initial prompt. That is to say, I typically need to ask 5-6 clarifying questions to an LLM to get the explanation that makes it click for me. This can be by asking for more examples, asking it to clarify certain points, fitting an example to another topic I know, etc... I think that while I was talking about it in the context of LLM tutors, the same logic also applies for human tutors as well. It's very hard for me to precisely formulate a question in one-shot that will get the right answer as you don't know what you don't know. And the reason I'm asking LLMs or a human in general is because I can't find a good explanation on the internet as it's typically something very niche or precise. Which is why I think it's good to have a back and forth.
Ah, so since you usually need to ask 5-6 clarifying questions, you'd rather it be done in short bursts synchronously!
Right now with explanations.app, you can only follow-up with comments OR by posting new questions - asynchronously. I see that this might be a problem.
I look forward to you trying it this weekend :^) so I can also hear about your usage experience before deciding on the best way to solve this sync./async. trade-off
(perhaps the naive solution is voice chat - you post a question, your teacher happens to be free, you happen to be free, you just talk quickly back & forth)
Hi - math student here too. Have you asked any image AIs for graphical representations of these Linear Algebra problems? How did those attempts go?
Asking because visualizing the problem scenarios was something that did not come right away for me, and I had to spend a lot of time in the tutoring center to build it. I believe the right prompt might yield the right thing.
There's an interesting stat that I like to look at in Football called xG. It's essentially a metric of expected goals based on all the shots that have been taken, and factors in things like where the shot was taken etc... Yet I've always thought that it was an imperfect metric as it didn't take into account things like opportunity cost of shooting when you could have passed to someone with a higher xG, or even the position of all the defenders / keeper when you're shooting. It would be cool if we could use a system like this to better understand the xG based on all of those factors and more. Basically, for any given position where we take the entire field and players even, into account, what is the actual xG.
I wonder what implications this has on distributing open source models and then letting people fine tune it. Could you theoretically slip in a "backdoor" that lets you then get certain outputs back?
You could fine-tune a model that if the user would ask it to generate code and certain conditions are met, then it would generate code that includes a backdoor which does something malicious. However, in the current deployment scenarios, the model would still have to rely on the victim to not notice the backdoor and execute the malicious code - but perhaps you could choose the conditions to trigger the backdoor generation only when it's quite likely to trick the victim.
(I'm assuming that the actual code running the model is clean, because if it's not, then you don't need to involve ML models at all)
edit: or do some fancy MITM thing on wherever you host the data. some random person on the interwebs? give them clean data. our GPU training servers? modify these specific training examples during the download.
edit2: in case it's not clear from ^ ... it depends on the threat model. "can it be done in this specific scenario". my initial comment's threat model has code is public, data is not. second threat model has code + data are public, but training servers are not.
model reverse engineering is a pretty cool research area, and one big area of it is figuring out the training sets :) this has been useful for detecting when modelers include benchmark eval sets in their training data (!), but can also be used to inform data poisoning attacks
I wonder how easy / hard and fast it will be to port over quest applications to the Vision Pro. Also wonder how hackable it will be. I'm guessing if it's anything like the iPhone / iPad it won't really be.
>Also wonder how hackable it will be. I'm guessing if it's anything like the iPhone / iPad it won't really be.
Fortunately visonOS is a macOS fork, not iOS. The idea for Vision Pro is to be a fully fledged computer replacement, not a secondary mobile device like iPad/iPhone.
I really like this idea. Reading other peoples code is one of the best ways to learn to code in my opinion.
Yet at the same time, a part of learning / understanding from reading other peoples code (given it's of good quality) is that you have to manually recreate what they're doing in your head without the comments.
I think for educational purposes the "educational comments" that you add have to hit a balance of explaining what the code does, without just explaining away an entire function. To that end, it might be better to add more comments in regards to specific lines, while letting the learner put the pieces of the different lines together themselves.
An extension that I'd like to see of this is many different types of codebases commented like this (OS, React, DBMS, etc etc) with additional information and diagrams about the overall design of the program and its different parts. Looking at and understanding a single file is all well and good, but without the context of how it fits into the bigger picture a lot of the potential learning is lost.
I think the author of this makes some good points to this effect. I believe that the best approach is a bit inverse of what the author is saying however. I think that Math Academy should be the base, where 80-90% of your time is spent. The remaining 10-20% should be used to supplement this. Either with lectures, textbook / edutainment reading or other ways to develop more of that intuition.