Personal Opinion of @risottobias@tech.lgbt

Since having many threads about this is inefficient and confusing... here are all my thoughts gathered in one place.

If the common sense part needed saying...

Foundations and cooperatives have no business interacting with the following things as they'll quickly damage their reputation:

  • bitcoin/blockchains
  • artificial intelligence
  • surveillance capitalism

corporate-ish, big-techy projects like Hachyderm already have a hard enough time on the fediverse separating their personal identity from their work identity. As much as folks would like otherwise, the prejudice is that someone who works at a big tech firm will decide that whatever that big tech firm does is ethical.

Nivenly should not work with Haidra or AI Horde. Here's why.

Foundation vs for-profit

For-profit companies like Microsoft (GitHub), Google, OpenAI (ChatGPT), etc., might find training AI models off of crawled datasets to be perfectly legal or ethical.

Users on Mastodon, folks like artists, writers, and other creators, certainly do not.

It's important to step outside the technical career bubble and consider that what might be a path to make money for a venture capital firm or a startup is certainly not a path forward for a charity, foundation, or cooperative. Especially one that wants to stay in good standing with a federated community.

Cooperatives are all about ethics; the same should be true of Hachyderm's Mastodon instance and of Nivenly at large.

Nivenly should not behave like a for-profit. A non-profit has very little incentive to help train AIs; the AI project has much more to gain from Nivenly's help than Nivenly does from it.

Appearances

The way that https://github.com/nivenly/community/discussions/2 is phrased, especially the final comment, gives cause for concern.

It makes the outcome seem predestined, as if some discussion with the creator of Haidra happened behind closed doors beforehand and somehow set in stone that this would happen.

"Also Monday the 15th: opening the General Member Discussion" - this seems backwards and also pre-destined, the small sample size in the discussion already raised concerns that should have shut it down well before getting to a general member phase (Even though that should have been first)

Public Relations

Prior to adopting a technology for the foundation, it's good to search Mastodon (and the wider web) for controversy about the subject.

It's good to ask about that beforehand. To ask artists and writers how they feel about it.

I bet you can guess their answer.

If you're pro-AI, you probably find them stubborn and intractable.

You probably already have a response in your head right now for how Haidra is different.

"Has Joined" the Nivenly foundation also sounds predetermined, like the community members cannot stop it, like it's part of an incubator project already.

AI Ethics

AI could be ethical, if:

  1. inclusion in the training set were opt-in
  2. it only contributed back to datasets that were themselves opt-in

Stable Diffusion (and contributing back to it) is not opt-in.
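
To make the opt-in point concrete, here's a minimal sketch (my own illustration, nothing from Nivenly or Haidra) of the only consent signal most crawlers honor today: robots.txt, which is opt-out rather than opt-in. The example.com URL and paths are placeholders; Googlebot and GPTBot are the documented user-agent names for Google's search crawler and OpenAI's training crawler.

```python
# Minimal sketch: checking what a site's robots.txt allows for two crawlers.
# This mechanism is opt-OUT -- if a site never thought to add rules for an
# AI crawler, everything is treated as fair game, the opposite of opt-in.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")  # placeholder site
parser.read()

for agent in ("Googlebot", "GPTBot"):  # search crawler vs. AI-training crawler
    allowed = parser.can_fetch(agent, "https://example.com/gallery/some-artwork")
    print(f"{agent} may fetch: {allowed}")
```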

Coin models / kudos / working toward a monetary system (for which folks have screenshots) is also not ethical.

Differences between a crawler and AI

A crawler for a search engine surfaces someone else's work for you to visit.

It cites the author. Ad revenue goes towards that author.

A crawler for an AI takes someone else's work and folds it into a corpus used to generate new content.

It does not cite the author. Revenue goes to the person who generates the content, or to whoever they generate it on behalf of.

AI-based scraping and garbage content generation is like a cheap clone of Stack Overflow: worse fidelity than the original.
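
As a toy contrast (purely illustrative pseudologic, not any real crawler's code), the difference comes down to what each pipeline keeps: a search index keeps the URL and author so traffic flows back to them, while a training pipeline keeps the words and drops the attribution.

```python
# Toy contrast, not real crawler code: what each pipeline retains.

def index_for_search(page: dict) -> dict:
    # Search engine: keep a snippet plus the URL and author, so readers
    # (and any ad revenue) are routed back to the original creator.
    return {"url": page["url"], "author": page["author"],
            "snippet": page["text"][:200]}

def ingest_for_training(page: dict, corpus: list) -> None:
    # AI training: keep the text in an anonymous corpus; the author
    # and the link back to them are discarded.
    corpus.append(page["text"])

page = {"url": "https://example.com/post",
        "author": "@someone@example.social",
        "text": "original writing by a real person..."}

corpus = []
print(index_for_search(page))          # attribution preserved
ingest_for_training(page, corpus)
print(len(corpus), "texts, zero authors recorded")  # attribution gone
```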

"embracing nuance"

continuing to entertain stealing artists' work

Code Now, Ethics Later

I don't think I can find where it was said directly on Mastodon or Discord, but something to the effect of "code now, ethics later" is definitely the antithesis of what should be happening here. Neither Nivenly nor AI Horde said those exact words; it was a criticism leveled at them to that effect, and I agree with that criticism.

If something isn't completely above board, it should be scuttled immediately and quietly before it damages Nivenly's reputation further. Pushing for more discussion and "nuance" sounds like trying to get someone who has refused consent to accept anyway, like tech's infamous, ever-present pop-up "consent" windows.

See the threads for even more criticism of how this has been handled:

"Focusing on the specific issue at hand"

Nivenly's and AI Horde's behavior here is shady, and work with Haidra and AI Horde should not be pursued further.

Specifically them.

And the way discussion around pulling in this AI project has gone.

Specifically this attempt at interfacing.

And frankly any future attempts would likely be muddied.

More broadly...

And more broadly any AI generation, for several reasons.

  1. it's clear that there's strong insistence from Nivenly and from the devs of Horde that this must go through / already has
  2. there are no guarantees that users' ethical concerns will be addressed ("code now, ethics later")
  3. not having the process be shady would be good...
  4. doing basic research on how Mastodon would respond to something
  5. thinking like a foundation, not a corporation