themeiguoren's comments | Hacker News

I ran my power bill for a small single-family home through ChatGPT and it was interesting. Cold winters/hot summers, electric stove, air conditioning during summers, and nothing else out of the ordinary that uses power.

- Base electricity: 17 kWh/day (10 in months without AC)

- Heating (currently gas): 33 kWh/day

- Heating (if I switched to heat pump with COP 3): 10 kWh/day

- EV charging at 10k miles/yr: 9 kWh/day

Total if I was fully electrified: 36 kWh/day, or 13 MWh/yr
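
A quick sketch of the arithmetic in case anyone wants to check it (the numbers are just the per-item estimates above, not measured data):

    # back-of-envelope check of the daily and annual totals
    base = 17       # kWh/day (drops to ~10 in months without AC)
    heat_pump = 10  # kWh/day: ~33 kWh/day of gas heat divided by an assumed COP of 3
    ev = 9          # kWh/day for ~10k miles/yr of charging

    daily = base + heat_pump + ev      # 36 kWh/day
    annual = daily * 365 / 1000        # ~13.1 MWh/yr
    print(daily, round(annual, 1))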


I completely agree. Even if I was able to understand the story and appreciate the prose in middle school, I can see looking back that I lacked the life experience to appreciate a lot of the undertones and unspoken themes.

I distinctly remember being completely bewildered when we read "Hills Like White Elephants" [1] and our teacher told us it was about an abortion, and ultimately about commitment and relationships and the ungraspable decision points that define a life. I remember rereading the text, not finding those words anywhere, and being confused about how a man and woman having a halting conversation at a train stop might have possibly given her that takeaway. But now of course it's achingly obvious.

Jane Austen similarly passed me by in high school. I needed to understand women a lot better before Pride & Prejudice started to make sense.

Even still when I read the classics, there are some where I can appreciate the themes but which are too abstract for me to resonate with. The difference from when I was young is that now I can tell that there's more story waiting to be told once I've lived more life. Maybe in another 20 years.

[1] https://jerrywbrown.com/wp-content/uploads/2020/02/Hills-Lik...


Of the things matlab has going for it, looking just like the math is pretty far down the list. Numpy is a bit more verbose but still 1-to-1 with the whiteboard. The last big pain point was solved (https://peps.python.org/pep-0465/) with the dedicated matmul operator in python 3.5.
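
For anyone who hasn't used it, a minimal sketch of what the @ operator buys you (the matrix and vector here are just placeholders):

    import numpy as np

    A = np.arange(9.0).reshape(3, 3)   # arbitrary example matrix
    x = np.array([1.0, 2.0, 3.0])

    y_old = np.dot(np.dot(A, A), x)    # pre-3.5 style: nested np.dot calls
    y_new = A @ A @ x                  # PEP 465: reads like the whiteboard, y = AAx

    assert np.allclose(y_old, y_new)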

Real advantages of matlab:

* Simulink

* Autocoding straight to embedded

* Reproducible & easily versioned environment

* Single-source dependency easier to get security to sign off on

* Plotting still better than anything else

Big disadvantages of matlab:

* Cost

* Lock-in

* Bad namespaces

* Bad typing

* 1-indexing

* Small package ecosystem

* Low interoperability & support in 3rd party toolchains


> Big disadvantages of matlab:

I will add to that:

* it does not support true 1d arrays; you have to artificially choose them to be row or column vectors.

Ironically, the snippet in the article shows that MATLAB has forced them into this awkward mindset; as soon as they get a 1d vector they feel the need to artificially make it into a 2d column. (BTW (Y @ X)[:,np.newaxis] would be more idiomatic for that than Y @ X.reshape(3, 1) but I acknowledge it's not exactly compact.)
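
To make that concrete, a small sketch with placeholder shapes (not the article's actual data):

    import numpy as np

    Y = np.ones((3, 3))          # placeholder matrix, just for illustration
    X = np.array([1.0, 2.0, 3.0])

    v = (Y @ X)[:, np.newaxis]   # keep the natural 1D result, promote only when needed
    w = Y @ X.reshape(3, 1)      # the snippet's style: turn X into a column first

    assert v.shape == w.shape == (3, 1)
    assert np.allclose(v, w)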

They cleverly chose column concatenation as the last operation, hardly the most common matrix operation, to make it seem like it's very natural to want to choose row or column vectors. In my experience, writing matrix maths in numpy is much easier thanks to not having to make this arbitrary distinction. "Is this 1D array a row or a column?" is just one less thing to worry about in numpy. And I learned MATLAB first, so I don't think I'm saying that just because it's what I'm used to.


> * it does not support true 1d arrays; you have to artificially choose them to be row or column vectors.

I despise Matlab, but I don't think this is a valid criticism at all. It simply isn't possible to do serious math with vectors that are ambiguously column vs. row, and this is in fact a constant annoyance with NumPy that one has to solve by checking the docs and/or running test lines on a REPL or in a debugger. The fact that you have developed arcane invocations of "[:,np.newaxis]" and regular .reshape calls I think is a clear indication that the NumPy approach is basically bad in this domain.

You do actually need to make a decision on how to handle 0 or 1-dimensional vectors, and I do not think that NumPy (or PyTorch, or TensorFlow, or any Python lib I've encountered) is particularly consistent about this, unless you ingrain certain habits to always call e.g. .ravel or .flatten or [:, :, None] arcana, followed by subsequent .reshape calls to avoid these issues. As much as I hated Matlab, this shaping issue was not one I ran into as immediately as I did with NumPy and Python Tensor libs.

EDIT: This is also a constant issue working with scikit-learn, and if you regularly read through the source there, you see why. And, frankly, if you have gone through proper math texts, they are all extremely clear about column vs row vectors and notation too, and all make it clear whether column vs. row vector is the default notation, and use superscript transpose accordingly. It's not that you can't figure it out from context, it is that having to figure it out and check seriously damages fluent reading and wastes a huge amount of time and mental resources, and terrible shaping documentation and consistency is a major sore point for almost all popular Python tensor and array libraries.


> It simply isn't possible to do serious math with vectors that are ambiguously column vs. row ... if you have gone through proper math texts

(There is unhelpful subtext here that I can't possibly have done serious math, but putting that aside...) On the contrary, most actual linear algebra is easier when you have real 1D arrays. Compare an inner product form in Matlab:

   x' * A * y

vs numpy:

   x @ A @ y

OK, that saving of one character isn't life changing, but the point is that you don't need to form row and column vectors first (x[None,:] @ A @ y[:,None] - which BTW would give you a 1x1 matrix rather than the 0D scalar you actually want). You can just shed that extra layer of complexity from your mind (and your formulae). It's actually Matlab where you have to worry more - what if x and y were passed in as row vectors? They probably won't be but it's a non-issue in numpy.
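
A small sketch of that difference, with placeholder values:

    import numpy as np

    A = np.eye(3)                        # placeholder matrix, just for illustration
    x = np.array([1.0, 2.0, 3.0])
    y = np.array([4.0, 5.0, 6.0])

    s = x @ A @ y                        # 32.0 - a plain 0D scalar
    m = x[None, :] @ A @ y[:, None]      # shape (1, 1) - still wrapped in a matrix

    print(float(s), m.shape)             # 32.0 (1, 1)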

> math texts ... are all extremely clear about column vs row vectors and notation too, and all make it clear whether column vs. row vector is the default notation, and use superscript transpose accordingly.

That's because they use the blunt tool of matrix multiplication for composing their tensors. If they had an equivalent of the @ operator then there would be no need, as in the above formula. (It does mean that, conversely, numpy needs a special notation for the outer product, whereas if you only ever use matrix multiplication and column vectors then you can do x * y', but I don't think that's a big deal.)

> This is also a constant issue working with scikit-learn, and if you regularly read through the source there, you see why.

I don't often use scikit-learn but I tried to look for 1D/2D agreement issues in the source as you suggested. I found a couple, and maybe they weren't representative, but they were for functions that can operate on a single 1D vector or be passed a 2D numpy array whose meaning, philosophically, is more like "list of vectors to operate on in parallel" than an actual matrix. So if you only care about 1D arrays then you can just pass them in (there's a np.newaxis in the implementation, but you as the user don't need to care). If you do want to take advantage of passing multiple vectors then, yes, you would need to care about whether those are treated column-wise or row-wise, but that's no different from having to check the same thing in Matlab.

Notably, this fuss is precisely not because you're doing "real linear algebra" - again, those formulae are (usually) easiest with real 1D arrays. It's when you want to do software-ish things, like vectorise operations as part of a library function, that you might start to worry about axes.

> unless you ingrain certain habits to always call e.g. .ravel or .flatten or [:, :, None] arcana

You shouldn't have to call .ravel or .flatten if you want a 1D array - you should already have one! Unless you needlessly went to the extra effort of turning it into a 2D row/column vector. (Or unless you want to flatten an actual multidimensional array to 1D, which does happen; but that's the same as doing A(:) in Matlab.)

Writing foo[:, None] vs foo[None, :] is no different from deciding whether to make a column or row vector (respectively) in MATLAB. I will admit it's a bit harder to remember - I can never remember which index is which (but I also couldn't remember without checking back when I used Matlab either). But the numpy notation is just a special case of a more general and flexible indexing system (e.g. it works for higher dimensions too). Plus, as I've said, you should rarely need it in practice.


x @ A @ y is supposed to be a dot product and you are saying this is better notation?? Row and column vectors have actual meaning. Sorry but I am not reading the rest of whatever you wrote after that. Not the GP but you should just consider the unhelpful subtext to be true.


> Autocoding straight to embedded

I used this twenty-something years ago. It worked, but I would not have wanted to use it for anything serious. Admittedly, at the time, C on embedded platforms was a truly awful experience, but the C (and Rust, etc) toolchain situation is massively improved these days.

> Plotting still better than anything else

Is it? IIRC one could fairly easily get a plot displayed on a screen, but if you wanted nice vector output suitable for use in a PDF, the experience was not enjoyable.


A bit off topic from the technical discussion but does anyone recognize what blog layout or engine this is? I really like the layout with sidenotes and navigation.


Seems like a Tufte-inspired style, something like this: https://clayh53.github.io/tufte-jekyll/articles/20/tufte-sty...


"The Iron Snow Beneath Your Feet"

What a beautiful, poetic article. I learned something new, and saw the majesty of Earth's natural systems in a new light.


This says more about the link budget than anything else; it's much harder to keep tracking when satellites are close to each other and moving at high relative velocities. At the distances in your example, movement of the laser link optical head is very slow, on the order of 0.01 - 0.1 deg/s. Optical heads also have a control loop which actively corrects for pointing errors once a positive link is established. Check out: https://www.sda.mil/wp-content/uploads/2022/04/SDA-OCT-Stand...


Not quite. The spokesman is talking about controlled deorbit, where propulsion is used to actively lower altitude rather than coasting down due to atmospheric drag. This is in contrast to controlled reentry, which targets an ellipse on the ground where any debris would fall. The latter requires either much more thrust than their electric thrusters have, or a much steeper reentry angle than Starlink's circular orbits.

Starlink satellites are pretty well aerodynamically balanced when in their "ducked" orientation, but are not going to be able to overcome aerodynamic torques below 200 km or so, meaning they will be unable to point their thrusters in target directions. At that point, there are still 1-2 days before reentry will occur. Hour-to-hour variability in thermospheric density due to solar flux levels and geomagnetic activity means that the precise reentry time will be unpredictable to within a few hours (which equates to anywhere along the ground track of a few orbits).


I found the book interesting from a historical perspective, but it doesn't have any "secret" information or anything to add beyond the far more extensive resources available online today. I find http://data-to-viz.com excellent for a high-level look at how to match a chart to your data and the story you want to tell with it, and the different plotting library example galleries can be great references for inspiration, e.g. https://matplotlib.org/stable/gallery/index.html


But if you want that, you need actual control: a voting vs. non-voting share split.


I thought I had a pretty good grasp on this, but the idea that the infinite set of higher-order moments uniquely defines a distribution, in a way analogous to a Taylor series, was new and super interesting! It gives credence to the shorthand that the lower-order moments (mean, variance, etc.) are the most important properties of a distribution to capture, and is how you should approximate an unknown distribution given limited parameters.
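
One way to see the Taylor-series analogy (presumably what's meant here) is through the moment generating function, whose Taylor coefficients at 0 are exactly the raw moments. A small sketch for a standard normal, chosen just for illustration since its MGF is exp(t^2/2):

    # The MGF's Taylor series at 0 has the raw moments as coefficients; for a
    # standard normal the even moments are the double factorials 1, 3, 15, ...
    import math

    def moment(n):                # raw moments of a standard normal
        return 0 if n % 2 else math.prod(range(1, n, 2))   # (n-1)!!

    t = 0.5
    series = sum(moment(n) * t**n / math.factorial(n) for n in range(20))
    print(series, math.exp(t**2 / 2))   # both ~1.13315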

