
It seems odd that they would remove all of the binaries as well. Want to commit a read-only version? Fine. But what about now, when less technical folks might want to install TrueCrypt on a new computer?

The point is, now another entity will need to establish itself as a trusted provider; there was no reason to rock the boat, per se, and as someone who laughs at most things, this just... really isn't altogether funny, unfortunately.

edit: assuming that it turned out to be a joke, for the sake of discussion (though is there a difference at this point?)


They've always removed old binaries when they release new ones. They used to ban users from their forums when they asked for old binaries.

They've also changed binaries without bumping the version number and re-released them unannounced.

This has been known for a very long time now. I'm not sure why people are surprised at the latest developments:

http://16s.us/software/TCHunt/TCD/readme.txt


Interesting. Admittedly I hadn't paid a lot of attention other than "get the latest release and install it", so I wasn't aware that they did any of those things. Thanks for clarifying.


What makes this different from something like Flowdock, for example?


(I work at Pie)

The key bit is that on Pie, everything gets a mini chatroom, so every discussion is focused. We're betting that as people share more and more stuff, they'll value group chat apps that help them organize and structure all of it.

On Flowdock, Slack, HipChat, etc., people usually talk in a "catch-all" chatroom like "Product team" or "Marketing team", which (in our experience) gets messy quickly, making it hard to share information there. It's often a mishmash of links, files, images, and conversations.

So Pie is great if you're on a team that shares a boatload of info: links, files, etc. It gives you a message board layout as opposed to one huge stream.


I'm confused about why this feature continues to ship enabled by default. AFAICT the de facto standard is to have users opt in to menu bar icons unless the app is only a helper or intended to be used solely from the menu bar. At the very least, remembering whether it was enabled/disabled in flags between builds/updates would be acceptable, IMO.


> I moved on to an "individual contributor" role with a company that provides higher level career opportunities that don't involve having direct reports.

Well, that sounds pretty slick. How's it working out? As someone who was just approached by management about a director position, and for whom this is somewhat familiar ground, I'm concerned about the lack of expansion and technical progression as well. I haven't really seen many "no direct reports" situations in my neck of the woods, unfortunately... but it seems like something to aspire to.


I was probably a bit negative with that reply (though I do think "X is our best engineer, let's have them manage others to make them more like X!" is far too common).

I'm enjoying the new (now two-year-old) role, but there are also negatives -- I have less visibility into the direction of the group/company, and "stuff happens" that I would've known was coming in the old role. I'm definitely getting to stay hands-on, and I'm satisfying the "big picture" itch with involvement in architectural and project planning discussions.

I think if you're considering that sort of move, just make sure you find the management challenges interesting and are willing to invest as much time and effort into getting good at the new job as you have at your "individual contributor" role. Build a good relationship with some existing managers and directors, and talk honestly with them about the role if at all possible.

I don't regret the short move to management or the move back, and I'd consider either in the future. It's important to understand that they are usually almost completely different jobs though.


The company I work for (based in NYC) offers exactly this kind of career track. Individual contributors are comparable to management in every way (salary, seniority, title, etc.) but are not forced into management in order to continue advancing. I don't want to hijack this thread with an advertisement, but if anyone is interested in more information, feel free to get in touch.


This looks neat.

The common hangup I have with these sorts of things, though, is figuring out the most reasonable way to host the data. IANAL, but AFAICT I'm prohibited from running something like this from my residence because my ISP (Verizon) says I can't run servers or dedicated services over my home connection. So OK, I'll go ahead and run it on a DO or EC2 or Linode instance - well, provided I feel like massively overprovisioning and thus paying inordinately for ample storage (what if I have terabytes of media?), or dealing with slow connections mounting/proxying to S3 (for example). On top of that, anyone who has media they did not acquire legally is now breaking (more?) laws, hosting agreements, etc.

Just curious to see how anyone else manages it - because XBMC/UMS/PS3 on the LAN is always great, but if I want something more available, like this or a private Roku channel, it seems less of a real option the more I consider it.


I'm a Comcast user, and while I know that I'm breaking the user agreement by hosting servers, I do it anyway, and so far they haven't been bothered by it. My presumption is that the clause is more of a CYA on their part: if you begin consuming terabytes of bandwidth hosting your own YouTube site or whatever, they don't need to invent much of a reason to turn you off. Frankly, I'm OK with ignoring it because it seems totally unreasonable in its current wording. What is a server? Sure, Apache is pretty cut and dried, but what about hosting an L4D2 game for three friends? What about that World of Warcraft patcher that acted as a BitTorrent client? How about those JavaScript libraries that "crowd host" your website? As far as I'm concerned, anything that sits on a port waiting for a connection is a "server", and it's almost impossible to do much online these days without unknowingly running one.

If Verizon is more stringent on this, you do have a few options if you want to do it anyway. One of the easiest is to only run the server when you intend to use it. Set up a script that allows you to turn it on/off via email or tweet or something. Or set up a port-knocking script that keeps the port closed, only opening it for outside connections when you want to watch Archer or whatever. That will protect you from the usual port scans and such, and is likely all you need to evade Verizon's ire.
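For what it's worth, a bare-bones sketch of that second idea might look like the following (Python, needs root; the knock port, the secret, the media server port, and the use of iptables are all assumptions on my part, not anything specific to your setup):

    #!/usr/bin/env python3
    # Hypothetical single-packet "knock" listener: wait for a UDP packet carrying
    # a shared secret, then open the media server's TCP port for that source IP.
    import socket
    import subprocess

    KNOCK_PORT = 31337        # made-up UDP port to listen on
    SECRET = b"open-sesame"   # made-up shared secret carried in the knock
    SERVICE_PORT = "8096"     # made-up TCP port your media server listens on

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", KNOCK_PORT))

    while True:
        data, (addr, _) = sock.recvfrom(1024)
        if data.strip() == SECRET:
            # Allow only the knocking IP through to the service port.
            subprocess.call(["iptables", "-I", "INPUT", "-p", "tcp",
                             "--dport", SERVICE_PORT, "-s", addr, "-j", "ACCEPT"])

A real port knocker would use a sequence of ports and expire the rules after a while, but the principle is the same: nothing answers on the service port until you've announced yourself.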

The increased bandwidth usage is a trickier problem; depending on your location/ISP, paying for offsite hosting may be your only option if you intend to stream large amounts of media. This is likely only going to get worse as infrastructure continues to degrade and ISPs continue to sit on it, so I envision we'll all be looking for options here in the future.


Fair points. Now that you mention it, the CYA rationale makes a bit more sense. And yup - it's like you said, it's almost impossible not to be contributing to other Internet users in some regard most of the time, with peering really being the LCD of those situations. Hell, being the host in a Halo matchmaking game would be enough for them to nuke your service, by contract logic. Also, good idea on the tweet/etc. - an even simpler option would be something like a text file on S3 that just flags it as on/off, and have the script poll that every so often. I think I'll do that.
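In case it's useful to anyone else, the S3-flag version could be as small as this (the bucket URL, the flag values, and the service name are all placeholders):

    #!/usr/bin/env python3
    # Poll a tiny text file for "on"/"off" and start/stop the media server to match.
    import subprocess
    import urllib.request

    FLAG_URL = "https://s3.amazonaws.com/my-bucket/media-server-flag.txt"  # placeholder
    SERVICE = "media-server"                                               # placeholder

    flag = urllib.request.urlopen(FLAG_URL).read().decode().strip().lower()
    subprocess.call(["service", SERVICE, "start" if flag == "on" else "stop"])

Run it from cron every few minutes and the box only listens while the flag says "on".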

Aside, but as an unreasonably paranoid person, I appreciate the candor WRT "screw it, I do it anyway" - this (personal media hosting/availability) specifically has been a pain point for a while, and with a decent pipe to the house and plenty of storage, it'd be a shame to let it all go to waste. ;)


> As far as I'm concerned, anything that sits on a port waiting for a connection is a "server", and it's almost impossible to do much online these days without unknowingly running one.

Absolutely true. I'm on Comcast too, and technically both my TiVo and WiFi router are 'servers' because they accept incoming connections from the internet. I also run a regular server for a few things, and I've never had a problem, but it's all been for personal use and light bandwidth. I think if I started streaming multimedia out of the house, I'd run the risk of setting off some bandwidth usage flags. But as long as your total in+out bandwidth is within their cap (250GB/month on my residential service), there's probably no problem.
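For a rough sense of scale (the 5 Mbps bitrate here is just an assumed figure for a decent-quality stream, not anything measured):

    cap_gb = 250.0                                  # my residential cap, per month
    bitrate_mbps = 5.0                              # assumed outgoing stream bitrate
    gb_per_hour = bitrate_mbps * 3600 / 8 / 1000    # ~2.25 GB per streamed hour
    print(cap_gb / gb_per_hour)                     # ~111 hours/month, ~3.5 hours/day

So a few hours of streaming a day lands right around the cap at that bitrate, and lower bitrates leave proportionally more headroom.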


I've never seen anyone get in trouble for running a personal service on their home connection. It's all about proportion: if you're running a service using up terabytes of bandwidth per month, they'll probably invoke their terms of service, but otherwise there's really no reason for them to cut you off.

And what's the difference between running a service like that and just having a public share on your router, which a lot of routers come with these days? There really isn't one.

If you want to host it on a service, just go with a dedicated server provider and not a "cloud" service, which isn't really a cheap way to host a lot of data. You can get a dedicated server with 1TB of HDD, 4GB of RAM, etc. from OVH for 10 euro/month. And that's a lot of music.

Edit: Looks like I got ninja'ed there ;)


Agreed, that does make more sense. As TheCraiggers said, CYA seems to be more likely the more that I think about it.

I mean, if you're streaming Blu-ray 24/7, I'd imagine that's one thing - but for a normal person consuming a few hours of personal streaming bandwidth per day, I'd hope they wouldn't even bat an eye.

The difference between running a service locally and in the cloud has pretty distinct implications in either scenario, not the least of which is overall cost. AFAICT, OVH is well in the minority in terms of offering significant storage at a reasonable price. "Unlimited" storage hosts, like Bluehost for example, require that your storage use be part of the "normal operation of your website" (by which I assume they mean "public use"), etc.


A good option is Hetzner hosting for about 60-90 USD / month.


> Docker runs fine with multiple processes in a container.

I think that, while most people realize this, it's important to highlight it again. Generally, the examples you see are "look, we can run this would-be-backgrounded-elsewhere daemon in the foreground in its own container!", which IMO sets the bar a bit low for how people initially approach Docker. Containers have simplified my life significantly, and (again, IMO) their real power becomes evident when you treat them as logical units rather than individual components.
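As a contrived illustration of the "logical unit" idea (the two commands are placeholders, and a real setup might just use supervisord or s6 instead):

    #!/usr/bin/env python3
    # Tiny supervisor-style entrypoint: run several processes in one container
    # and exit as a unit if any of them dies.
    import subprocess
    import sys
    import time

    COMMANDS = [
        ["nginx", "-g", "daemon off;"],   # placeholder front-end process
        ["gunicorn", "app:app"],          # placeholder app server
    ]

    procs = [subprocess.Popen(cmd) for cmd in COMMANDS]

    while True:
        dead = next((p for p in procs if p.poll() is not None), None)
        if dead is not None:
            # One child exited, so take the rest down and surface its exit code.
            for p in procs:
                if p is not dead:
                    p.terminate()
            sys.exit(dead.returncode)
        time.sleep(1)

The point isn't this particular script so much as that the container boundary can wrap a whole service, not just one daemon.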


Note that this was posted in late October 2013; according to the HHVM blog[1], major stability, unit test, and performance improvements were made to HHVM for a variety of frameworks (including Symfony) in mid-December, so this may be slightly out of date.

[1] http://www.hhvm.com/blog/2813/we-are-the-98-5-and-the-16


Indeed - the numbers should be even better now.


Absolutely. Years back, my wife bought me an Apple TV as a gift from Amazon, but had it shipped to an old address in the Bronx - whose current tenants happily signed for and kept the package. When we realized it had been delivered and signed for, we called Amazon to explain the situation, and with literally zero hassle, they sent us a new one to our new address, free of charge. I don't expect that this is common, and I would never expect them to do it for me again, but it won my customer loyalty more than any other company has thus far.


You have to boot into your recovery partition (Cmd+R on boot), then there's a menu option to set the firmware password, which will be active on the next reboot.

