Do you believe that a completely in-the-open forum would be manageable? By this, I mean things like:
- Everyone gets to see...
- who casts each up/down vote,
- what any moderators' decisions/actions are,
- how the algorithms that allocate viewing real estate work,
- etc.
Some other aspects:
- The "forum" (and all inter-user communication) lives on a block chain-like merkle-DAG.
- The real estate algorithms would be customizable, i.e. you have varying views on the same data.
- The "forum" would also act as chat rooms, todo lists, picture galleries, and more...
This is a fascinating topic worthy of several blog posts. I've spent many years thinking about why HN and Reddit managed to succeed when all other communities failed. Much of it has to do with inertia, but part of it is design.
One of the central aspects of HN is that humans control it, and they have good taste. I don't think you can take either of those qualities away without sacrificing what makes HN good.
And part of that implies that any HN competitor needs to have those qualities in abundance – which means allowing them to wield real power, and not being constantly second-guessed by the community.
It's a tradeoff, and it's the "good king" problem. You want someone in power who wants to do the right thing, and who is capable of doing it well. But that's extremely rare.
Dan is close, and we're all lucky to have him. Scott is a close second. But their weakness is that they don't really participate on HN anymore. Neither of them post here, and we don't get to feel connected with them like we felt with pg in the early days.
I think I'm not alone in missing that connection. So my theory is that if another site springs up with similar characteristics to HN, but with actual humans running it – people you can actually strike up a conversation with – then those who are interested in good conversation will flock to it.
That requires being capable of executing that plan, which is the tricky part.
Thanks for the extensive response. There are a couple of points, so I'll break it down (I'm also loquacious on occasion):
> One of the central aspects of HN is that humans control it, and they have good taste. I don't think you can take either of those qualities away without sacrificing what makes HN good.
What if the user could "control it", i.e. could pick and choose the algorithms for how to present views of the same data? Since everything is in the open, you could also share these algorithms - just like you share a collection of browser extensions or editor color schemes. So you could have the "HN algorithm", which has the characteristics you mentioned, with the slow-ban and upvotes/downvotes.
You could heavily weight certain users' tags (think of voting up/down in terms of raw tags, with the repercussions of voting as an interpretation of those tags). These users would be who "HN" is - in this case the two you mentioned. You could also incorporate AI/ML opinions: Sentiment Analyzer A/B/../N, Some Other ML Analyzer, Bob's Analyzer. The main thing is that you have a way of associating identities with opinions, and then creating a projection algorithm that projects stories based on those identities and opinions.
So interestingly, it would still be humans running the platform, but each individual user would be able to create his/her own projection algorithms.
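To make that concrete, here's a minimal sketch of what I mean by a shareable projection algorithm. Everything in it is hypothetical - the tag records, the identity weights, and the scoring function are illustrations, not ibGib's actual API - but it shows the idea of projecting the same open data through different identity weightings:

```python
from collections import defaultdict

# Hypothetical open data: every tag (up/down/etc.) is public, along with
# the identity that applied it.
tags = [
    {"story": "story-1", "identity": "moderator-a",    "tag": "up"},
    {"story": "story-1", "identity": "random-user",    "tag": "down"},
    {"story": "story-2", "identity": "sentiment-ml-a", "tag": "up"},
]

# A shareable "projection": weights per identity. An "HN-like" projection
# might weight trusted humans heavily and an ML analyzer only lightly.
hn_like_weights = {"moderator-a": 10.0, "random-user": 1.0, "sentiment-ml-a": 0.5}

def project(tags, weights):
    """Score each story by interpreting the raw tags through identity weights."""
    scores = defaultdict(float)
    for t in tags:
        w = weights.get(t["identity"], 1.0)
        scores[t["story"]] += w if t["tag"] == "up" else -w
    # This projection's "front page": highest score first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(project(tags, hn_like_weights))
```

Swapping in a different weights dict gives a different front page over the exact same data, which is the "varying views" idea from above.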
> I think I'm not alone in missing that connection. So my theory is that if another site springs up with similar characteristics to HN, but with actual humans running it – people you can actually strike up a conversation with – then those who are interested in good conversation will flock to it.
At some point, won't the volume of interaction with any individual human exceed that person's ability to respond? Won't the mentions, responses, etc. eventually be too much to handle? Also, isn't the up/down voting on this site what determines the definition of the "good" in "good conversation"?
> That requires being capable of executing that plan, which is the tricky part.
Darn skippy. The characteristics I mentioned above I've already got well grounded in ibGib, and it's taken quite a while. It's a merkle-DAG-based open-data design that I've created. Only I didn't create it to be a blockchain, as I didn't know how blockchains worked and was only vaguely aware of Bitcoin's existence. I created it to be a distributed microservice architecture (again, I didn't know the term microservice... I came up with "autonomous service" 15 years ago - still have the whiteboard). I wanted a SuperMemo-like learning algorithm, but with the ability to have all of the aspects of the algorithm measurable (not an easy thing), to maximize the learning process. I've been shaving the yak ever since, and it turns out to act the way some people understand blockchain graph data stores to act.
It's not that your idea doesn't have merit. It does. I went down this path myself, in the beginning.
But if you game out the implications of this, the conclusion is that everyone will see a different front page. And that has a bunch of subtle implications.
Reddit tried it. It could work. But it makes for a divided community, or set of communities. You see this with the various subreddit wars.
Do you have an email address where I can chat with you more about this? Hit me up if you're interested.
I've started to respond a little to your conclusion about everyone seeing different front pages. Note that the site seems to be a little slow at the moment (I'm not much of an optimizing coder, as the underlying tech is hard enough - premature optimization and all that).
> protocol designers to fine-tune incentives and punishments
This is an interesting phrasing of one of the questions I have about the "decentralized" aspect of cryptocurrencies. I get that the context of the term is that it attempts to bypass the centralization of a fiat currency. But isn't it still centralized to those who develop the protocol itself?
Or is the agreement of the community a requirement of the protocol?
My heuristic seems to be that the precise protocol implementation decisions are analogous to the precise laws enacted by the centralized governments who provide the value behind mainstream fiat currencies.
> Or is the agreement of the community a requirement of the protocol?
Yes. Always with the choice of creating your own rules and convincing others to switch to them.
Decentralization in this context would mean different rules for every participant, which is not something that we want - an extreme example of individualism where everyone has different beliefs, so a group cannot accomplish anything.
You and the other commenter both quoted the same aspect about agreement of the community, so see my other response.
> Decentralization in this context would mean different rules for every participant, which is not something that we want - an extreme example of individualism where everyone has different beliefs, so a group cannot accomplish anything.
That is quite interesting, thinking of it as extreme individualism. But in thinking about that, isn't that what the entire ecosystem of cryptocurrencies is currently doing, with any one particular cryptocurrency itself being a point of centralization?
Again, I'm just tugging at a thread here, trying to understand how it's supposed to be even conceptually possible to be "decentralized". It just seems to me that we are talking about a different form of centralization - which may be a good thing! Possibly centralization of choice, as opposed to centralization with regards to physical location (i.e. which country you live in).
Yes, I instinctively agree with you. But what about sunk investments? Also, what about borderline protocol implementation improvements? Death by a thousand papercuts... that kind of thing?
Is keeping the algorithm source in the open - which allows for forks when the opportunity presents itself - pretty much the optimal strategy against any transaction protocol being abused?
"Reworking" identity from the ground up as OP suggests is actually one of the goals that I've been working on with ibGib. No one really cares, but I'm going to describe some of the more interesting (to me) aspects of it, to bounce it off of you and others here.
First, ibGib's structure is like a blockchain's. I've been developing it for a long time, and I had no idea what a blockchain (or the like) was. An ibGib's structure looks like this:
* ib - unstructured text, like a name.
  * often provides data or metadata for convenience per use case, i.e. the data is right in the address, without loading the entire record.
* gib - a hash of the ib, data, and rel8ns fields.
  * currently sha256, but that is metadata and can be specified in the data section.
  * ib + gib (ib^gib) is a "content address", but I think of it as something like a memory pointer in an infinite memory space.
* data - internal data, like the "value" or "content" of the record.
* rel8ns - named "merkle" links to other ib^gib addresses.
  * special rel8ns include:
    * "past" - provides a linked list of mutations
    * "ancestor" - provides a linked list of forks
    * "dna" - provides an event-sourcing-like complete history of how to build the record.
So, it's effectively like a tree version of a blockchain, or a distributed (and scalable) blockchain. Or, if you're familiar with IPFS (which is where I learned the term "merkle"), it's like a merkle forest. (I've been working on ibGib for 15+ years though - I had never heard of IPFS either, but I digress.) Basically, you can think of the entire thing as self-similar git repos, but for anything - not just code. (I'm currently working on the VCS use case for it, which is why I've taken the code off of GitHub. You can see my current "issue" for it at https://www.ibgib.com/as-chat/version%20control%20in%20ibGib...)
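To make the shape concrete, here's a hypothetical ibGib record sketched as a plain Python dict. The four top-level fields come from the list above; the particular values, the placeholder hashes, and the "identity" rel8n are made up for illustration:

```python
# A hypothetical ibGib record illustrating the four fields described above.
# All values and ib^gib addresses here are invented placeholders.
ibgib_record = {
    "ib": "comment",                         # unstructured text, like a name
    "gib": "A1B2C3...",                      # hash of ib + data + rel8ns (currently sha256)
    "data": {                                # internal data, the record's "value"/"content"
        "text": "hello ibGib",
    },
    "rel8ns": {                              # named merkle links to other ib^gib addresses
        "past":     ["comment^9F8E7D..."],   # linked list of mutations
        "ancestor": ["comment^001122..."],   # linked list of forks
        "dna":      ["fork^AABBCC..."],      # event-sourcing-like build history
        "identity": ["email^DDEEFF...",      # identities associated with this datum:
                     "node^112233...",       # users, nodes, sessions, etc.
                     "session^445566..."],
    },
}

# The record's own content address is ib + "^" + gib:
address = f'{ibgib_record["ib"]}^{ibgib_record["gib"]}'
```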
So this works with identity in a different way, in that each record is internally associated with multiple identity ibGibs. For the above example, check out the "identity" key in the "rel8ns" section. So, each individual datum is associated with _multiple_ identities for multiple things: users, nodes, sessions, etc. The piece I'm working on right now (in the active process of whiteboarding/coding at this very second) is the public key infrastructure "replacement". Because the data has this entire integrity chain, you can do different things for verifying provenance.
The way that you "prove" who you are is similar to the current SPHINCS algorithm (https://sphincs.cr.yp.to/ or ), which is an ever-expanding many-times hash-based signature scheme. In my algorithm though, you can create "keystones" which act similarly to public/private key pairs. Each stone has a list of hash challenges and the specs of the challenge difficulty. For example, if I have a stone of 100 challenges, the stone may say that a valid challenge requires a minimum of 5 challenges to be answered. The challenges are based on 1-way hashes (recursively called with a depth that is included in the params of the stone). So, when you first communicate between nodes, you provide a public global stone, that is replicated, e.g. to a "public key server" analog or wherever. In the initial contact between any two nodes this global stone is challenged, and if successful any future communications between the two nodes works on a private stone (created also in the handshake). Then, each transaction - in the form of ibGib data structures - is proven in the future using that private keystone. The ibGib internal integrity allows for integrity of the data exchange, as it's basically hashing the entire communication for verification.
And so, identity is established among nodes, and all data is verifiable. It's very tricky to really "nail down" the provenance once you get multiple nodes involved, but even if there is a known mistake, that is where another aspect of the data comes into play: monotonic (append-only) data.
Again, this is like a version control repository for your data. This leaves a full audit trail, yada yada yada, it's really neat. I've typed enough for people to ignore anyway. If anyone is interested, ask about how this affects identity with users AND IoT devices AND AI! Ah well. At the very least, the website is instructional for navigating around merkle forests.
Although, for a Vimperator replacement because of the breaking FF/Vimperator upgrades, I've found VimFX to be pretty awesome. I had to write some configuration scripts to get d/D to drop tabs like Vimperator, but other than that it works very well.
I'm looking forward to more genuine MFA. For my site, I'm experimenting with the ability to identify yourself with as many email address identities as you want (in the future the plan is to add more types including oauth, sms, etc.). If you're a regular person, you can just use one. If you're cagey, maybe two or three. Straight up paranoid, how about 10?
The point is that you are basically using an extensible claims-based approach to identity to create "aggregate identities". In the case of a beginner user, it just looks like "my account". More advanced users can add more security as necessary.
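For what it's worth, here's roughly how I picture the "aggregate identity" part in code. The claim types and the threshold policy are hypothetical - the point is only that nothing hard-codes the "2" in 2FA, and more kinds of claims can be bolted on as needed:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Claim:
    kind: str        # "email", "oauth", "sms", "biometric", ... (extensible)
    subject: str     # e.g. the address or provider the claim refers to
    verified: bool   # whether this claim's challenge has been answered

@dataclass
class AggregateIdentity:
    claims: List[Claim]
    required: int    # user-chosen: 1 for "my account", 3+ for the paranoid

    def authenticated(self) -> bool:
        # Any `required` number of verified claims will do; nothing here
        # hard-codes "2" factors or any particular claim type.
        return sum(c.verified for c in self.claims) >= self.required

me = AggregateIdentity(
    claims=[
        Claim("email", "me@provider-x.example", verified=True),
        Claim("email", "me@provider-y.example", verified=True),
        Claim("sms",   "+1-555-0100",           verified=False),
    ],
    required=2,
)
print(me.authenticated())  # True
```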
So instead of hacking 1 email/account they would just hack 2 or 3? I don't think that is adding any real security as those accounts would still just be protected by regular passwords. It makes it a tad bit harder for a hacker but not prohibitively so, because if they got the credentials to your first account then the others are probably not too much harder.
The real power of 2FA is having the code generated by you, the human, via your hardware device or software physically controlled by you and not another automated machine.
> So instead of hacking 1 email/account they would just hack 2 or 3? I don't think that is adding any real security as those accounts would still just be protected by regular passwords. It makes it a tad bit harder for a hacker but not prohibitively so, because if they got the credentials to your first account then the others are probably not too much harder.
That's certainly one of the thoughts that I had originally! But if you look at the details, perhaps it will become a bit clearer: each of my email accounts is itself protected by 2FA, so "those accounts" are not just "protected by regular passwords".
You can have email accounts with multiple email providers, e.g. Gmail, Outlook, etc. So, depending on how your email account gets compromised, this gives you an additional layer of security. If mail provider X has a security breach, no big deal, because you are also using provider Y.
More generally, this can be seen with any factor in authentication, i.e. a claim. If any claim X is compromised, by any particular attack vector, then you also have Y, Z, etc. in play, depending on your security vs. convenience configuration.
And as I stated, email is only one of the avenues used to provide evidence for a claim; in the future, OAuth(2) tokens, SMS, etc. will be added. The point is that it's an extensible mechanism for genuine MFA, instead of hard-coding the "2" in 2FA. And that diversity is where the "real power" of multi-factor authentication comes into play.
This really does just seem like 2FA with extra steps.
You can't add N factors to multi-factor authentication by adding more accounts. That's just lightly strengthening the first factor (something you know, which here is a few different accounts) with a smattering of the second factor (those accounts rely on something you have, such as your phone). The third factor of something you are doesn't even come into play in this solution.
Having 2FA set up for the account in question makes it reasonably secure. Relying on a second account that also has 2FA enabled does not make it twice as secure. It might make it slightly more secure but not by a lot. It's even likely that the second account is using the same device for the second factor as the first account which negates any added security.
The best you can do in a scheme like this is shift the trust-based security to a second entity. It's the same level of security, just handled by something you might trust more (Google/Facebook vs. some random website I had to make an account for).
> Relying on a second account that also has 2FA enabled does not make it twice as secure.
This is an absurd statement that I didn't imply, but perhaps you inferred?
> The third factor of something you are doesn't even come into play in this solution.
As I've said, the point is to allow for additional claims to be given. "Something you are", i.e. biometrics, is certainly "in play" in this solution; it is yet another claim you can add to establish an identity. The point is that the identification is extensible, and that these judgment calls are left to the end user - the ones you're depicting rather insouciantly as some kind of "absolute truth", when what we're actually talking about is trade-offs between security and convenience, as well as defense in depth.
> It's even likely that the second account is using the same device for the second factor as the first account which negates any added security.
You're assuming that the attack vector is only at the end device. Of course diversification of hardware, like a keyfob or smart card, is an added layer of defense. But that doesn't mean there is no value in multiple identities on the same device. It all depends on the specifics of how your device is compromised - or whether it's your device that is compromised in the first place. As I said, what if a single email address of yours is hacked, or a single email (or OAuth, or SMS) provider has a data breach?
> The best you can do in a scheme like this is shift the trust-based security to a second entity.
Creating your own user/pass scheme, or your own OAuth server, is certainly one of the options we have, so again this is not necessarily "shifting to a second entity".
I'm wondering if this is just trolling at this point? You're making simply outlandish remarks with numerous assumptions and with little regard to what I'm actually saying.
I must say that this is one of my primary use cases for ibGib. The point is to have a review system (and advertising) that is not superficial, but rather one where the "review" lasts the lifetime of the product. When you buy an appliance, you ibGib it. This means you take pics of the machine, the model number, the guarantees, the manuals, etc. Then periodically (or at the very least when it dies or has a failure) you ibGib _that_. It is basically big (& open) data with tamper-resistant integrity (no deletes, hashed content, public identity, more).
This is a broad question, as ibGib is many things. To be precise, it is its own question and answer, so the answer to this would be "ibGib" - encompassing your definition of ibGib, Bob's definition, etc. - but that would probably be overly pedantic. Basically, it's different things and has many use cases.
As for ibGib WRT software, it's an engine/architecture that I'm implementing (https://github.com/ibgib/ibgib, https://www.ibgib.com). It's probably easiest to think of the engine as a graph database (though it isn't one) and the web app as one interface to the engine. The data store architecture has only four fields: ib, gib, data, and rel8ns. The ib is a user-controlled, variable "name"; the data is internal state as a key-value store; the gib is a sha-256 hash of the ib, data, and rel8ns fields; and the rel8ns is a list of named relationships to other ibGibs. So the ib+gib (ib^gib) acts as a content-addressable URL to the ibGib itself. The rel8ns turn the graph into what is now thought of as a merkle graph - or possibly a merkle forest, since the rel8ns allow for multiple single-graph paths/projections to be created.
So any ibGib has internal data and relationships to other ibGibs maintained via ib^gib pointers. Since these pointers contain the gib hash, this provides integrity and verification of the structure. I've seen a lot of similarities in ibGib's structure with things like IPFS and others, but unlike such systems, ibGib is not file/folder-centric. Those are like two specific roles of ibGibs: files are focused mainly on the internal data, and folders are focused on the relationships (but they have only one type of relationship: hierarchical/containment).
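A minimal sketch of the content-addressing piece, assuming JSON-ish records and sha-256; the field layout mirrors the description above, but the helper names are mine, not the engine's API:

```python
import hashlib
import json

def compute_gib(ib: str, data: dict, rel8ns: dict) -> str:
    """gib = sha-256 over the ib, data, and rel8ns fields (canonicalized here as sorted JSON)."""
    payload = json.dumps({"ib": ib, "data": data, "rel8ns": rel8ns}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest().upper()

def address(record: dict) -> str:
    """ib^gib acts as the content-addressable pointer to the record."""
    return f'{record["ib"]}^{record["gib"]}'

def verify(record: dict) -> bool:
    """Any node holding a record can recompute the gib to check the pointer's integrity."""
    return record["gib"] == compute_gib(record["ib"], record["data"], record["rel8ns"])

note = {"ib": "note", "data": {"text": "hello"}, "rel8ns": {"past": []}}
note["gib"] = compute_gib(note["ib"], note["data"], note["rel8ns"])
print(address(note), verify(note))  # e.g. note^3C5F...  True
```

Because the pointer embeds the hash, changing a record's data or rel8ns changes its gib and breaks every link that pointed at the old version, which is where the integrity guarantee comes from.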
> What does the name represent?
That is an extremely interesting question for me personally. Suffice to say that the acronym was first conceived with the phrase "i believe God is being" (I was agnostic borderline atheist at the time). Since it has a religious context, I avoid speaking too much to it in others' forums. (But for me, it's about logic.)
I'm planning on doing a Show HN here in the future once I have a couple more features implemented! I'd love to talk to you (or anyone) about it in more detail if you're interested. :-)
> Why didn’t I call myself? Mostly, because I hate making unsolicited calls of any kind, a phobia that I admit isn’t entirely rational and that often causes inconvenience.
Interesting. I hadn't thought of a reservation as being unsolicited. What about online reservations that are more pubsub-like?
Yes, they both seem to be focused on the differentiable inter-agent communication aspect. I wonder if this is related to the recent articles on how honey bees communicate to each other the learning required for pulling strings, rolling balls, etc. (e.g. on HN https://news.ycombinator.com/item?id=13723645)