There is a lot of potential in this. Not just for government democracy, but also for introducing democratic elements into tech/AI policy, when that tech has impact comparable to many governments.
I worked in tech and, after some formative experiences, shifted to helping ensure that tech's impact on society serves the public interest. But that leaves a question: what is the public interest?
Sortition / lottocracy / deliberative democracy / mini-publics all roughly refer to the same way to answer that question — providing a representative microcosm of the public with the space to deeply examine an issue and come to a set of recommendations and decisions on it. Unlike electoral democracy, it's faster to spin up and experiment with, and it's harder for bad actors to entrench power (elections can be useful, but they're one of many tools in the democratic toolbelt).
I've now started an organization focusing on applying this and other democratic paradigms to decision-making about AI (https://aidemocracyfoundation.org/) as a way to solve a variety of challenging governance problems across the AI stack.
If you're curious about it, our ICML paper goes into more detail: https://arxiv.org/abs/2411.09222 .
I think it's interesting to think about this question of open source, benefits, risk, and even competition, without all of the baggage that Meta brings.
I agree with the FTC that the benefits of open-weight models are significant for competition. The challenge is distinguishing good competition from bad competition.
Some kinds of competition can harm consumers and critical public goods, including democracy itself. For example, competing for people's scarce attention, or for their food purchases, with increasingly optimized and addictive innovations. Or competition to build the most powerful biological weapons.
Other kinds of competition can massively accelerate valuable innovation.
The FTC must navigate a tricky balance here — leaning into competition that serves consumers and the broader public, while being careful not to accelerate kinds of competition that could cause significant risk and harm.
It's also obviously not just "big tech" that cares about the risks behind open-weight foundation models. Many people have written about these risks even before it became a subject of major tech investment. (In other words, A16Z's framing is often rather misleading.) There are many non-big tech actors who are very concerned about current and potential negative impacts of open-weight foundation models.
One approach that can provide the best of both worlds: in cases where there are significant potential risks, ensure there is at least some period during which weights are not released openly, in order to learn about the potential implications of new models.
Longer-term, there may be a line where models are too risky to share openly, and it may be unclear what that line is. In that case, it's important that we have governance systems for such decisions that are not just profit-driven, and which can help us continue to get the best of all worlds. (Plug: my organization, the AI & Democracy Foundation; https://ai-dem.org/; is working to develop such systems and hiring.)
making food that people want to buy is good actually
i am not down with this concept of the chattering class deciding what are good markets and what are bad, unless it is due to broad-based and obvious moral judgements.
At first glance it feels like the most effective way to game this system is to grind up user credit through aggregate low-polarization support on fairly neutral, low-impact posts, then strategically 'spend' it on higher-profile polarizing posts. Is that a fair 'red teaming' observation?
Yes I think this actually could work. Community Notes has a basic reputation system: users need to "Earn In" by rating notes as "Helpful" that are ultimately classified by the algorithm as helpful. Once enough attackers earn in, they can totally break the algorithm.
Breaking it is not as simple as upvoting a lot of, say, right-wing or left-wing posts, though. The algorithm will simply classify all the attackers as having a very positive or negative polarization factor and decide that their votes can be explained by this factor.
What would work is upvoting *unhelpful* posts. I have actually simulated this attack using synthetic data and sure enough it totally breaks the algorithm. I write about it in this article: https://jonathanwarden.com/improving-bridge-based-ranking/
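To make the mechanism concrete, here is a toy sketch of the intercept-plus-polarity factorization behind bridge-based ranking. This is my own simplification, not the production Community Notes model (the real system uses different loss details and hyperparameters); it shows how purely partisan votes get absorbed into the polarization factor while cross-camp agreement lands in the note intercept, which is what makes the "upvote unhelpful posts" attack the interesting one:

```python
import random

def fit(ratings, n_users, n_notes, epochs=2000, lr=0.05, reg=0.1):
    """Fit rating[u, n] ~ user_bias[u] + note_bias[n] + f_user[u] * f_note[n].

    A one-factor sketch of bridge-based ranking: the note bias (intercept)
    acts as the 'helpfulness' score, while the factors soak up polarization.
    """
    random.seed(0)
    bu = [0.0] * n_users
    bn = [0.0] * n_notes
    fu = [random.uniform(-0.1, 0.1) for _ in range(n_users)]
    fn = [random.uniform(-0.1, 0.1) for _ in range(n_notes)]
    for _ in range(epochs):
        for u, n, y in ratings:
            err = (bu[u] + bn[n] + fu[u] * fn[n]) - y
            bu[u] -= lr * (err + reg * bu[u])
            bn[n] -= lr * (err + reg * bn[n])
            fu[u], fn[n] = (fu[u] - lr * (err * fn[n] + reg * fu[u]),
                            fn[n] - lr * (err * fu[u] + reg * fn[n]))
    return bn, fu, fn

# Two camps of four users each. Note 0 is partisan (camp A up, camp B down);
# note 1 is upvoted by both camps.
ratings = ([(u, 0, +1) for u in range(4)]
           + [(u, 0, -1) for u in range(4, 8)]
           + [(u, 1, +1) for u in range(8)])
bn, fu, fn = fit(ratings, n_users=8, n_notes=2)
# Expect: bn[1] (the bridged note) well above bn[0] (the partisan note),
# with |fn[0]| large because the factor explains the partisan split.
```

A coordinated partisan bloc just inflates the factor terms, as the parent comment says; coordinated upvotes that look ideologically diverse instead inflate the intercept, which is the attack the linked article simulates.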
Oh hey, I came across your Social Protocols group while doing my regular rounds of Polis-related projects a few months ago, when I found Propolis! Was trying to figure out why your name was familiar-ish :)
There's also a Polis User Group discord: https://link.g0v.network/pug-discord It's pretty low-key lately, but high density of potentially-aligned ppl. I am hoping to restart the weekly open calls for prospective Polis facilitators and self-hosters, in case you're interested to log in.
Thanks for your posts by the way! I am jealous of your output -- I tend to have a few calls/meetings about Polis per week, but am not so great at producing clean artifacts like this :)
The reasoning was: coming up with (and answering) yes/no questions is more effort and a higher barrier to participation than just posting anything and having up/downvotes, like in a social network. Requiring this formalization of all content on a platform creates an entry barrier: people need to formulate what they want to post as a yes/no question. At the same time, it disallows content that does not fit the yes/no-question model.
Our big insight was: we can drastically simplify the user interaction and allow arbitrary content, but keep the collective-intelligence aspect. That's achieved by introducing a concept similar to Community Notes, but in a recursive way: every reply to a post can become a note, and replies can have more replies, which in turn can act as notes for the reply. Replies are A/B-tested to see whether, when shown below a post, they change the voting behavior on the post. If a reply changes the voting behavior, it must have added some information that voters were not aware of before, like a good argument.
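A minimal sketch of that scoring idea (my own simplification; a real system would presumably need proper randomization and significance testing): compare vote behavior on a post between users who were shown a given reply beneath it and users who weren't.

```python
def informativeness(votes_without_note, votes_with_note):
    """Estimate how much a reply shifts voting on its parent post.

    Each argument is a list of votes (+1 up, -1 down) from users randomly
    assigned to see, or not see, the reply below the post. The absolute
    shift in mean vote is a crude informativeness score for the reply.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return abs(mean(votes_with_note) - mean(votes_without_note))

# A reply that flips many upvotes to downvotes scores high:
control = [+1, +1, +1, +1, -1]    # post shown without the reply
treatment = [-1, -1, +1, -1, -1]  # post shown with the reply
score = informativeness(control, treatment)  # 0.6 - (-0.6) = 1.2
```

A reply that changes nothing scores near zero, so uninformative replies can be ranked below ones that carry a genuinely new argument.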
If you can provide a script that takes in an HTML file and provides an image ready for rendering, that would be amazing. Then I can automatically take any website and have a cron job that dumps the result into a shared dropbox link where it can be used by the screen.
> If you can provide a script that takes in an HTML file and provides an image ready for rendering, that would be amazing.
Yea, that's something I have been trying to build, but it's surprisingly non-trivial. There are a bunch of headless browser options, but I haven't found a good way to tell them: "Render the page in X width and Y height and then take a screenshot".
That seems like a problem that should have 100 open-source solutions, and I am sure there are some that work really well! But I personally haven't found one yet.
At least that is what I used to do for screen testing in some of our low-hanging-fruit QA.
At some point I rewrote it in puppeteer and it was as simple as the above line.
The resulting screenshot is the X/Y size.
I'd be interested in why this doesn't work in your use case.
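For what it's worth, there is also a non-puppeteer route: headless Chrome/Chromium can take a fixed-viewport screenshot straight from the CLI, which is easy to drive from a cron job. A sketch, with the caveat that the binary name (`chromium`, `chromium-browser`, `google-chrome`) and exact flag behavior vary by platform and version:

```python
import subprocess

def screenshot_cmd(url, out_png, width=1280, height=720, browser="chromium"):
    """Build a headless-Chromium command that renders `url` at a fixed
    viewport size and writes a PNG to `out_png`."""
    return [
        browser,
        "--headless",
        "--hide-scrollbars",
        f"--window-size={width},{height}",
        f"--screenshot={out_png}",
        url,  # can be a local file, e.g. "file:///path/to/page.html"
    ]

def take_screenshot(url, out_png, **kw):
    # Raises CalledProcessError if the browser exits non-zero.
    subprocess.run(screenshot_cmd(url, out_png, **kw), check=True)
```

Something like `take_screenshot("file:///tmp/dash.html", "/tmp/dash.png")` could then run on a schedule and drop the result wherever the screen picks it up.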
I made something almost exactly like this before. I needed to convert SVGs to PNGs and have them display the same way they looked in the browser. It turned out that spinning up Chromium and taking a screenshot was the easiest way to do that. I think I used puppeteer.
It feels fairly reasonable imo to specify something like "this uses phantomjs with the following screen size" and just say people's work has to fit that.
It's worth knowing that, according to some recent research, taking a topical antibiotic like this might have permanent impacts on the microbiome in your nose (and lead to drug resistance).
Yes, but systemic antibiotics are substantially worse in this regard, as they change your entire systemic microbiome, and a chronic sinus infection means a permanent, highly negative microbiome in your nose. Sometimes it's better to hit the reset button than endure misery for the rest of your life :-)
Just to add to this: my ENT prescribed a nasal probiotic prior to my surgery due to staph risk. I used it for six months before the surgery, and I continue to use it because, placebo or something special, it seems to improve the airflow in my nose beyond what I'd expect from just squirting some buffered water up my nose. Additionally, I seem to get fewer head colds, and they seem to resolve more rapidly (I have two small children, so my house is literally an incubator for infections). This is the product, in case you were curious: https://liviaone.com/collections/probiotics/products/probiot...
Fascinating. This led me to this project, which is pretty interesting: https://www.modos.tech/blog/modos-paper-monitor
Seems like a very exciting effort and product but I'm not sure if it's still active.
You can use the "Make spoken audio from" and "play audio file" actions available to Shortcuts. I was able to get it to play from the Mac dock using this method.
> Sure, but that's irrelevant. Whether or not the user understands the answer they posted is not the concern of the site.
Well, that's unfortunate. Then again, I guess that's a logical conclusion of the "safe harbor" for serving any user-submitted content: Stack Exchange only does the most cursory moderation, and the rest is caveat readator.
That thinking led to https://www.belfercenter.org/publication/towards-platform-de... . That basic approach has been somewhat picked up by Meta (https://www.wired.com/story/meta-ran-a-giant-experiment-in-g...) and OpenAI (https://aligned.substack.com/p/a-proposal-for-importing-soci..., roughly leading to their democratic inputs and collective alignment work).