Hacker News | tzahola's comments

Moore’s law is over.


>"Food liquid" or drink (a liquid that is specially prepared for human consumption) is a part of the culture of human society and not only a substance which addresses the basic human need to drink.

What a weird sentence!


More than a few paragraphs in there read like a marketing spiel, but some of it might just be an author in the field using terms of art and other verbiage that wanders across their desk. Reads naturally to them but totally alien to muggles.


This sentence was certainly not emitted by an alien. Its provenance could only be the humanoid cortex.



Every sentence in that article is an act of wilful vandalism inflicted on the English language.


I love obscure pages like this... too technical to be removed, too obscure to attract users who would edit the content into a more readable form.


D contracts are arbitrary code snippets. So they're Turing-complete, therefore it's still uncomputable.
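A rough analogue of the point, in Python rather than D (hypothetical code, not D's actual in/out contract syntax): the precondition below is ordinary, possibly non-terminating code, so proving it holds for every input reduces to the halting problem.

    # Hypothetical precondition "contract": arbitrary code, so statically
    # verifying it for all inputs is undecidable.
    def reaches_one(n: int) -> bool:
        while n != 1:                      # nobody knows if this terminates for all n
            n = 3 * n + 1 if n % 2 else n // 2
        return True

    def f(x: int) -> int:
        assert reaches_one(x)              # the "contract"
        return 2 * x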


My hypothesis is that some people blindly upvote anything that has the word “React” in it.


Yep. The plebs are onto us. Time for a crash ;)


Daily reminder for data science and machine learning types: fill your pockets while you can, because machine learning bootcamps are on the horizon!


There have been data science/machine learning bootcamps around for a while (Galvanize and Metis being common examples in San Francisco), but apparently job placement is not in a good place (as with normal bootcamps).

Indeed, machine learning/deep learning has become much more accessible thanks to the number of free guides such as this. But that means data science job placement will become more difficult as competition increases, with more gatekeeping/requirements (e.g. Master's degrees/PhDs).


The issue I've heard from a few people in hiring is that there is a surplus of junior data scientists from these camps and a shortage of senior data scientists to manage them. The problems are not dissimilar to tech hiring in general, but companies need a lot more SWEs than data scientists.


Depends what the company is doing.

Most companies are going to utilise ML to some extent. Once the technology and tooling improve, they'll need boots-on-the-ground engineers, not labs with R&D teams.


Masters/PhDs can be boots-on-the-ground engineers too.


Honestly, unless these ML bootcamps are extensive courses on calculus, linear algebra, and statistics, and not just "Here's k-means. Memorize it," I doubt they'll harm the market for grad-school-educated data scientists.


I'd like to offer a counterpoint. I attended one of the machine learning bootcamps mentioned above, and it was transformative for me. I got hired within a month, doubled my salary to over 100k, and landed a job that I enjoy and find intellectually stimulating. All this while having little to no technical experience (the only math I took in college was intro to stats, and my pre-bootcamp career was in a non-technical capacity).

I completely understand why there is such a stigma around bootcamps. There's no denying that they don't afford the same depth that you'd get at a "real" program. But they can be amazing for career switchers like me, who had no real direction in college. Don't look down your nose at them.


Agreed


Yes, deep learning is what web-development was twenty years ago (and now everybody and their mother can build a website).


Too bad ML as a service is already largely cornered by $FANG


> Too bad ML as a service is already largely cornered by $FANG

Wat?

Neither Facebook nor Netflix offers outsiders access to its ML platform, and you completely forgot Azure, which IMHO has the most mature offering of the big 3 in this space.


You cannot learn machine learning or deep learning in a few months. You can learn to copy what these guides do, but if you want to do something slightly different you will feel you know nothing (because you probably don't actually know anything about the maths behind why things work, so when you want to change them you don't know how).


I don't deny that knowing the math/theory is useful, but I wonder if we sometimes overestimate the degree to which it is essential. For example, backprop with SGD is a good foundation for many, many, many applications of NNs, and pre-built implementations exist that let you use the technique without understanding the details of the math. And with those tools, you can experiment with many different combinations of features, different architectures, etc.

Of course understanding the theory will be helpful in knowing which architectures are most likely to be productive and what-not, but this whole field is very empirical anyway. So if your experimenting is a little less guided by intuition rooted in theory, that's not exactly the end of the world.
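A minimal sketch of that point, assuming PyTorch as the pre-built implementation (the toy data and tiny architecture here are made up): the library handles backprop and the SGD update, so you never write the gradient math yourself.

    import torch
    from torch import nn

    X = torch.randn(100, 10)                       # hypothetical toy data: 100 samples, 10 features
    y = torch.randint(0, 2, (100,)).float()        # binary labels

    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.BCEWithLogitsLoss()

    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(1), y)
        loss.backward()                            # gradients via autograd
        opt.step()                                 # SGD update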


I wasn't talking about backpropagation. But sometimes you need to change the loss function, or the shape of the network, or combine two models. Backpropagation is the same for those examples, but the rest of the math isn't.

The only thing that makes you think it is easy is that you are just copying what others have been doing without changing anything. Try to go beyond that and you will change your mind quite quickly.


The reason you need an education in this theory is twofold: how do you fix something that is broken, in limited time? And how do you assure that the model is reliable? Being able to answer these confidently, from reasoning grounded in theory, is going to be the real value.


Sure, but it's a continuum, not a binary dichotomy. Just like you can do more with your car if you have degrees in mechanical engineering and fluid dynamics, but a person with nothing but a high-school diploma can upgrade a camshaft.

The point is, you can do a lot of very useful things with ML, without needing the entirety of the theoretical underpinnings. Of course you can't do everything but not everybody needs to be able to do everything.


So as I said, you can copy what others do. That is fine, but you don't know deep learning; you know how to apply it based on examples, which is fine for a lot of things.


Have boot camps noticeably suppressed wages for software engineering in general?


While I welcome the privacy implications of this, I don’t see why recovery wouldn’t be possible if the T2 chip and the SSD are intact?


In that case, target disk mode presumably works and you don't need a special recovery port?


The T2 and SSD can be intact while other stuff on the motherboard is fried. Would target disk mode work in that case?


I guess it depends on what exactly is "fried"? I assume that the CPU is needed for target disk mode, but maybe a faulty GPU wouldn't matter?

But to be honest I have no idea -- I'm a software developer and I know very little about hardware.


> In that case, target disk mode presumably works and you don't need a special recovery port?

No good to you if the power connector on the logic board is screwed but the chips are fine.


Don’t rock the boat!


Just learn the fundamentals in a platform-agnostic way and you’ll be set for life:

- 3D geometry (lines, planes, implicit and parametric surfaces)

- basic splines (Bezier, Hermite, maybe NURBS)

- matrix transformations

- rotation via quaternions

- projective 3D geometry

- shading (diffuse, Phong, etc)

- advanced tricks (shadows, reflections, etc)

Once you nail down these topics, the rest is just learning the specific API quirks (memory model, synchronization, etc), be it OpenGL, Metal, DirectX, Vulkan, or whatever comes next in ten years.
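For example, here's quaternion rotation (one of the items above) in a few lines of NumPy; nothing about it is tied to any particular graphics API, so the same math transfers to whichever one you end up using (the function names here are just illustrative):

    import numpy as np

    def quat_mul(a, b):
        # Hamilton product of two quaternions stored as (w, x, y, z)
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def rotate(v, axis, angle):
        # v' = q v q*, with q built from a unit axis and an angle in radians
        axis = axis / np.linalg.norm(axis)
        q = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
        q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
        return quat_mul(quat_mul(q, np.concatenate(([0.0], v))), q_conj)[1:]

    # 90 degrees around Z takes the X axis to the Y axis: ~[0, 1, 0]
    print(rotate(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), np.pi / 2))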


How does this connect to the parent comment??


If WebGL were a 2D API, it would be impossible to do this kind of work. Also VR in general.


It would always be possible to project a 3D world onto a 2D plane in any 2D graphics API, which is exactly what the projection matrix does in OpenGL.


The projection matrix transforms the 3D world into a 3D world in screen space. Two of the dimensions are the screen coordinates, and the third is depth into the screen. The depth is used for Z-buffering (hiding the stuff in back), and for fog and focus effects.
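A small NumPy sketch of that, using the standard OpenGL-style perspective matrix (the particular fov/aspect/near/far values are arbitrary): after the perspective divide you still have three coordinates, and the third is the depth value that goes into the Z-buffer.

    import numpy as np

    def perspective(fovy, aspect, near, far):
        # Standard OpenGL-style perspective projection matrix
        f = 1.0 / np.tan(fovy / 2)
        return np.array([[f / aspect, 0, 0, 0],
                         [0, f, 0, 0],
                         [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
                         [0, 0, -1, 0]])

    P = perspective(np.pi / 3, 16 / 9, 0.1, 100.0)
    p_clip = P @ np.array([1.0, 2.0, -5.0, 1.0])   # a point in view space
    x, y, depth = p_clip[:3] / p_clip[3]           # perspective divide: screen x, y, plus depth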


Depth in OpenGL is a separate 2D buffer. You can use it to approximate the effects of 3D space, but it’s still fundamentally a set of 2D operations.

Of course, it’s all fundamentally just bits on a heap, so past a certain point the argument becomes academic.


That, I sort-of agree with. Layering isn't unique to 3D, so it's debatable whether the incorporation of a depth buffer makes OpenGL a 3D API.


It's not layering. It's depth. The depth buffer works for cases where two objects cross in Z, unlike the painter's algorithm.
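A minimal sketch of the difference (illustrative names, not any real API): the depth test happens per pixel, so it doesn't care how whole objects would sort.

    # Per-pixel depth test, the core of Z-buffering. Painter's algorithm
    # instead sorts whole primitives back-to-front, which breaks when
    # two of them cross in Z.
    def write_fragment(framebuf, depthbuf, x, y, depth, color):
        if depth < depthbuf[y][x]:      # nearer than whatever is there already
            depthbuf[y][x] = depth
            framebuf[y][x] = color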

