adamisntdead's comments | Hacker News

I wrote something similar a couple of years ago - https://github.com/adamisntdead/QuSimPy

Happy to answer any questions people have, including about simulation methods other than state vector!


One of the aspects emphasized in TFA is being able to simulate gates of any dimension/number of qubits (like a 3-qubit Toffoli or a 5-qubit Molmer-Sorensen), instead of just 1- and 2-qubit gates. Have you thought about extending your simulator to support gates of greater than two qubits?
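
For a concrete toy example (my own sketch, nothing from TFA or QuSimPy): a state-vector simulator can treat a 3-qubit Toffoli as just another dense unitary, here an 8x8 permutation matrix applied directly to the state.

    import numpy as np

    # Toffoli flips the target bit exactly when both control bits are 1,
    # i.e. it swaps the |110> and |111> basis states.
    def toffoli():
        m = np.eye(8)
        m[[6, 7]] = m[[7, 6]]   # swap the rows for |110> and |111>
        return m

    basis = np.zeros(8)
    basis[6] = 1.0                       # |110>
    print(np.argmax(toffoli() @ basis))  # 7, i.e. |111>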


I guess back when I wrote it I didn't see the need, given that single-qubit gates along with CNOT are universal and those are implemented. I think erring on the side of 'as few features as is educational' was my mentality on this.


I know about the conventional 2-qubit Molmer-Sorensen but not the 5-qubit version. Could I ask what it is?


See e.g. equations (5) and (6) from [1], where any subset of the qubits in a system might undergo the action of the M–S gate. But the usual presentation of the physics of M–S is definitely on a two-qubit system.

[1] https://arxiv.org/abs/1601.06819


I'm currently a mathematics undergraduate at Cambridge, and there are quite a few students who live-type notes in various formats. I think the majority of the learning curve comes from getting used to writing in LaTeX; once you have that down (so that typing both prose and mathematics takes little to no effort), you can type notes faster than you can write them, with the small exception of matrices and certain advanced notation that slow you down.

The benefits of having typed notes, at least for me, come from being able to search, having a good record of my own understanding of a course, and not having to rely on keeping handwritten notes safe. They also look pretty, which is a bonus when studying from them.

Examples:

- Analysis I (which have been edited): https://adamkelly.me/files/ia-analysis-i/analysis-i.pdf

- Graph Theory (not edited but diagrams added): https://adamkelly.me/files/ii-graph-theory/graph-theory.pdf

As a side note, one other thing I do is write short 'handouts' on topics that I think I have something to say about. For example https://adamkelly.me/files/handouts/direct-products/direct-p....


I am always in awe of people like you. I have no idea how someone can type \frac{}{} faster than I can draw a horizontal line (while listening to a lecturer), but I believe you.

Meanwhile I use OneNote. I don't have to worry about keeping notes safe, or the stress of trying to type at warp speed. I would go with a reMarkable tablet, but now with subscription pricing it's a non-starter.


A LaTeX-specific IDE with \frac snippets and autofill helps a lot. You can for sure get speedy with practice!

Those beautiful graphs, though… I ended up using separate programs to generate images (PDF or whatever) of graphs and such, then bringing them into LaTeX the usual way. Not as good!


My recollection is that the main speed advantage lay in being able to copy-paste the previous line when working through a derivation.

I live-LaTeXed notes for a couple of years in undergrad but eventually went back to pen + paper (approximately all my assignments continued to be typeset, though).


Great stuff! I am in awe of your beautifully done graphs.

For speed with, e.g., matrices like you say, but also more specialized course-specific notation, could you build a set of quick macros for the topic of the day to aid note-taking?

If you set up your environment just so, you can repurpose really simple commands like the slash in "\this" to do some common but annoying thing like bold upright lettering or underbars. (I actually need to go look at LaTeX to be sure what all you could make maximally parsimonious… it's been a while and I mostly copy and paste my old commands around. Anyway, you probably already do this kind of thing.)
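
For example, something like this in the preamble (just a sketch of the kind of shortcuts I mean; the macro names here are made up):

    % Hypothetical shortcuts for fast note-taking
    % (assumes \usepackage{amssymb} for \mathbb)
    \newcommand{\R}{\mathbb{R}}          % blackboard-bold reals
    \newcommand{\vb}[1]{\mathbf{#1}}     % bold upright vectors
    \newcommand{\ub}[1]{\underline{#1}}  % underbars
    \newcommand{\f}[2]{\frac{#1}{#2}}    % shorter \frac
    % Then in the lecture: $\f{d}{dx}\, \vb{u} \in \R^n$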


If we're talking about fun maths, there are a couple of really good channels that look at Olympiad problems.

- vEnhance (https://www.youtube.com/c/vEnhance), an MIT student and IMO Gold Medalist who solves problems live on stream (and plays various games)

- Osman Nal (https://www.youtube.com/c/OsmanNal), looking at problems ranging from AMC/AIME to IMO 3/6s

- Michael Greenberg (https://www.youtube.com/channel/UC3mhbGC7kQgzkXT9fceNOwA), a geometry enthusiast who also live solves problems, though the production quality isn't amazing

And of course, Michael Penn (but that's in the post above)!


I'm a big fan of this video from Reducible on Huffman codes: https://www.youtube.com/watch?v=B3y0RsVCyrw, and from 3Blue1Brown on Hamming codes: https://www.youtube.com/watch?v=X8jsijhllIA


Thanks. To be more specific, I'm interested in learning Shannon's information theory from scratch. So far I know just the definition of entropy.


Shannon's book, The Mathematical Theory of Communication, is approachable with the mathematical background of first-year college calculus. I don't think it requires any more math than is essential to understand the topic.
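
And since you already know the definition of entropy, that really is the core object; here's a quick sketch of computing it (my own illustration, not from the book):

    import math

    def shannon_entropy(probs):
        """H(X) = -sum p*log2(p), in bits, over a discrete distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(shannon_entropy([0.5, 0.5]))  # 1.0 bit: a fair coin
    print(shannon_entropy([0.9, 0.1]))  # ~0.47 bits: a biased coin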


I'm surprised that something like point compression can even be patented, considering that it's relatively straightforward mathematically (all of the difficulty seems to lie in the number theory needed to compute a solution, which I assume wasn't invented by whoever owned the patent)...
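
To illustrate how little machinery is involved, here's a toy sketch of point compression using secp256k1's parameters (my own illustration, and certainly not whatever the patent covered):

    # Compression keeps x plus the parity of y; decompression recovers y as
    # a modular square root, a single pow() call here since P % 4 == 3.
    P = 2**256 - 2**32 - 977   # the secp256k1 field prime
    B = 7                      # curve: y^2 = x^3 + 7 (mod P)

    def compress(x, y):
        return (y % 2, x)

    def decompress(parity, x):
        y = pow((pow(x, 3, P) + B) % P, (P + 1) // 4, P)
        return (x, y if y % 2 == parity else P - y)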


Cryptography patents are, on some level, stupid. It's all math.


Because the pacemakers rotated, they weren't the same all of the way through.


That article had really nice depth to it; it answered all of the questions I had yesterday when I first saw the project.


I have been thinking about this sort of thing quite a lot - while I do think it's wrong to split people up on a plane, it is quite an interesting concept.

A similar but less malicious case would be organising the seating on a train or in a cinema, but rather than biasing against families, biasing against other, unrelated groups.

For example - in a cinema, you usually do not wish to sit right beside another group when the rest of the seats are free. Still, you don't want to be that far from the center.

How do you design an algorithm for this? How will it scale? How can you make the largest number of groups happy with their seats? How do you avoid leaving individual seats that nobody wants?
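
A toy greedy version of what I'm imagining (the buffer rule and scoring here are made up for illustration):

    # Place each group in the free run of seats (single row) closest to the
    # centre, keeping one empty buffer seat wherever it would touch another
    # group. Greedy, so not optimal; just a sketch of the problem.
    def free_runs(row):
        start = None
        for i, seat in enumerate(row + [object()]):  # sentinel ends last run
            if seat is None and start is None:
                start = i
            elif seat is not None and start is not None:
                yield start, i - start
                start = None

    def place_group(row, group_id, size):
        centre = len(row) / 2
        best = None
        for start, length in free_runs(row):
            pad_l = 1 if start > 0 else 0                  # touches a group?
            pad_r = 1 if start + length < len(row) else 0
            if length < size + pad_l + pad_r:
                continue
            pos = start + pad_l
            score = abs(pos + size / 2 - centre)           # distance to centre
            if best is None or score < best[0]:
                best = (score, pos)
        if best is None:
            return False
        for i in range(best[1], best[1] + size):
            row[i] = group_id
        return True

    row = [None] * 12
    for gid, size in enumerate([2, 3, 2, 4], start=1):
        place_group(row, gid, size)
    print(row)  # group 4 no longer fits once the buffers fragment the row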


Every major cinema I know of in San Francisco allows seats to be reserved on a first-come, first-served basis. All seats cost the same. It's extremely simple. There might be some negative effect for the cinema, whereby people won't buy tickets for a showing if they see all the good seats are taken, but I strongly suspect that is far outweighed by people going to the cinema more often because they know they won't have to show up early or worry about getting decent seats.


Cinemas where I am usually don't have allocated seats. Now that you've prompted me to think about it, that fact definitely makes me less likely to go.

Pay money and have a fair chance of getting a seat I'm not happy with? No thanks.


While not exactly the answer to your questions, problems of a similar nature ("how to most efficiently use a space for one or multiple purposes") can be approached using Golomb rulers [1]. I first learned about them when I played with the Distributed.net client to calculate optimal Golomb rulers. I feel they, or a similar class of tools, could be used to answer your questions.

[1] https://en.m.wikipedia.org/wiki/Golomb_ruler
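
For anyone curious, the defining property is easy to check (a quick sketch):

    from itertools import combinations

    def is_golomb(marks):
        """True iff all pairwise differences between marks are distinct."""
        diffs = [b - a for a, b in combinations(sorted(marks), 2)]
        return len(diffs) == len(set(diffs))

    print(is_golomb([0, 1, 4, 9, 11]))  # True: an optimal order-5 ruler
    print(is_golomb([0, 1, 2, 5]))      # False: difference 1 appears twice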


> How can you make the most number of groups happy with their seats?

You mean, make the most money? Nobody cares about making customers happy.


Well, if you get assigned bad seats you might not come back. For this sort of thing, I would say happy customers lead to more money.


>> You mean, make the most money? Nobody cares about making customers happy.

> Well, if you get assigned bad seats you might not come back. For this sort of thing, I would say happy customers lead to more money.

There's a lot of space between making customers so unhappy they never return and making them happy. The smart capitalist will eagerly trade the happiness of his customers for more profit, so long as he doesn't hit a tipping point that destroys his business.


I heard a story where Walmart did an experiment in one of their stores: they rearranged all of the displays, made the aisles wider, all sorts of things that made the shopping experience more pleasant and less stressful. Customer feedback was almost unanimously positive; people loved shopping at Walmart now, when previously it had been a hostile and Kafkaesque experience.

But customers spent less money.

So Walmart put everything back the way it was.


Just like McDonald's with its seating that is just the right amount of uncomfortable to get the customer out of there ASAP after eating, but not quite so uncomfortable that they wouldn't go there at all.

I feel this goes on a lot in retail as well, e.g. music in clothing stores. Also IKEA with its mazes, and the milk always being at the furthest end of the supermarket.

As someone who is quite sensitive to such situations, I'm at the end of the bell curve that finds the whole experience so unpleasant that I mostly just avoid stores.


Quantum computing won't require much power beyond what Python can provide; the only heavy processing will be circuit generation, which (as far as we can see at the moment) is fine to do in Python.

In the short term, though, there's a big place for languages like C, C++ and Rust for things like simulations, which still need to be done efficiently.


Yes, there are two reasons that Python is an ideal tool for quantum computing libraries at the moment.

- In the NISQ era [1], circuits have limited depth and size. It doesn't matter so much which language (or even algorithm!) you use when N<1000.

- Simulating a circuit is expensive, but all the heavy lifting can be delegated to highly optimized C code. The most expensive part of Cirq's simulation is (or soon will be) a call to `numpy.einsum` [2].

1: https://arxiv.org/abs/1801.00862

2: https://github.com/quantumlib/Cirq/blob/24638f234704686c4bb6...
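
To make the einsum point concrete, here's a minimal sketch (my own, not Cirq's actual code) of applying a single-qubit gate to an n-qubit state vector, where NumPy does all the heavy lifting:

    import numpy as np
    import string

    def apply_gate(state, gate, qubit, n):
        """Apply a 2x2 gate to one qubit of an n-qubit state (n < 26)."""
        psi = state.reshape([2] * n)
        axes = string.ascii_lowercase[:n]
        out = axes[:qubit] + 'z' + axes[qubit + 1:]
        spec = f"z{axes[qubit]},{axes}->{out}"  # e.g. "zb,abc->azc"
        return np.einsum(spec, gate, psi).reshape(-1)

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    state = np.zeros(2**3, dtype=complex)
    state[0] = 1.0                     # |000>
    print(apply_gate(state, H, 0, 3))  # 1/sqrt(2) on |000> and |100>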


As someone who read this with an editor full of OpenCL kernels, I think Apple must really have missed the point of these sorts of frameworks: heterogeneous computing.

If I wanted the best possible speed, the latest features, etc., I would write multiple back ends in things like CUDA.

I chose OpenCL because I can develop code on my MacBook Pro, and run that on a computer with a discrete GPU on a different operating system, and have a fair amount of confidence that it would work.

:/


> I chose OpenCL because I can develop code on my MacBook Pro, and run that on a computer with a discrete GPU on a different operating system, and have a fair amount of confidence that it would work.

That was the promise, but it never became reality. When writing kernels for real-world applications, OpenCL breaks down in numerous ways. The result is usually neither stable, nor portable, nor fast and a pain to work with. There was never OpenCL support for 3rd party developers on iOS.

You say you are writing OpenCL kernels on an MBP and they are portable; maybe you got lucky? Lots of comments I see on the deprecation of OpenCL seem to come from people who like the idea of having OpenCL (and its promises, which are awesome), but never had the awful experience of actually working with it.

I remember the open letter from the Blender developers on the sad state of OpenCL support on Mac (http://preta3d.com/os-x-users-unite/) from 2015. Some GPU vendors (AMD, Intel and Qualcomm) continued to put resources into better OpenCL support over the last couple of years, but maybe too little, too late? It seems Apple had already given up on OpenCL by the time of this letter (and moved their resources completely to Metal), as nothing new has happened for OpenCL since then.

I'd prefer if we had a working OpenCL on many platforms. As we don't, especially not on Apple platforms, the step of deprecating it is regrettable, but at least honest.


I know that Apple is a commercial organisation and not a charity, but projects like Blender bring a lot to the platform.

It would be great to find out later that Apple had reached out to the Blender dev team with a strategy on how to move to either Metal or a Vulkan/Metal adaptor.

Personally I was thinking about getting an eGPU just for Blender use. It would be a shame to have to leave macOS just to run Blender.


Agreed, I am in a similar situation. This is very sad. Also, while OpenCL is a bit verbose to interact with directly, Vulkan compute shaders are much, much worse. I realise that at some point I will have to start using it, but I'm not looking forward to it.


>because I can develop code on my MacBook Pro, and run that on a computer with a discrete GPU on a different operating system, and have a fair amount of confidence that it would work.

I'm not an OpenCL programmer by trade, but I have dabbled in it (wrote an AES decrypter in OpenCL), and I have never found this proposition to be true.

