CaHoop's comments | Hacker News

poly.ai | London, UK | Multiple Roles | Visa Sponsorship | Hybrid Working | Full Time

The studio team at PolyAI are hiring for backend and fullstack roles:

* Software Engineer – Platform (Graduate/Mid-level) https://poly.ai/careers/software-engineer-platform/

* Software Engineer – Fullstack https://poly.ai/careers/software-engineer-fullstack/

Here at PolyAI we have built a developer platform for creating voice agents (think Google Assistant in the call center, but better!). Now we are building the studio to let anyone build and maintain complex voice agents (less technical know-how required). It's a big new direction for the company, with loads of greenfield work coming up!

We are a fairly small team of developers (currently only 7) and we are planning to expand to 12-16 in the coming months.

I really love the culture we have in our team. We are a friendly bunch who are passionate about what we do and regularly have in-person socials.

Have any questions? You can leave a comment on this post!

Technologies used: Python, Go, Kubernetes, react-redux


I saw this paper on reddit.com/r/machinelearning - the more interesting ML papers are usually posted there.

Additionally, there is http://www.arxiv-sanity.com/, which sorts new machine learning papers by popularity.


This seems to have taken very heavy influence from http://getdango.com/emoji-and-deep-learning/

Or it could actually be the other way around, seeing as this post is about a year old.


I used a Bloomberg terminal at work. Yes, it is a bit annoying if you don't use it every day, but it is really nice once you get used to it. It has nice chiclet keys, which have more action than a Mac keyboard's.


Thanks for the ideas!

The data we used is not in our repo. It came from a database given to us by our university, and there were confidentiality issues when it came to making it publicly available.


That video was just a mock-up of our product. If you would like to see how the models performed, have a look at the slides we made for the project (at the bottom of the post). The best performance we got was an r of about 0.7.


An r of 0.7 is a bit optimistic, I would say. We only got 0.7 for people we had been able to train on. For completely new people, the model learned nothing from audio spectrograms. I speculate that convolutions on raw audio would do better.

Estimation from video seemed much better. There we actually got an r of 0.5 on unseen people, which I find very promising.
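
To give a sense of what I mean by convolutions on raw audio: a minimal sketch, assuming a PyTorch-style model (the class name, layer sizes and input length are made up for illustration, not what we actually ran):

    import torch
    import torch.nn as nn

    # Hypothetical 1D conv stack over the raw waveform (no spectrogram step).
    class RawAudioRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=64, stride=4), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=32, stride=4), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=16, stride=4), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),        # collapse the time axis
            )
            self.head = nn.Linear(64, 1)        # single regression output

        def forward(self, waveform):            # waveform: (batch, 1, samples)
            x = self.features(waveform).squeeze(-1)
            return self.head(x).squeeze(-1)     # (batch,) predictions

    model = RawAudioRegressor()
    preds = model(torch.randn(8, 1, 16000))     # e.g. 1 s of 16 kHz audio

The idea being that the first conv layer can learn its own filterbank from the waveform, rather than being limited to whatever the spectrogram transform throws away.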

