Risottobias

This is the set of random snippets / personal notes that I've decided to make pseudonymously public.

You can search 'em to find the ones you want :)

You can find me on mastodon at @risottobias@tech.lgbt

Personal Opinion of @risottobias@tech.lgbt

Since having many threads about this is inefficient and confusing... here are all my thoughts gathered in one place.

If the common sense part needed saying...

Foundations and cooperatives have no business interacting with the following things, as these will quickly damage their reputation:

  • bitcoin/blockchains
  • artificial intelligence
  • surveillance capitalism

Corporate-ish, big-tech-y communities like Hachyderm already have a hard enough time on the fediverse separating their members' personal identities from their work identities. As much as folks would like otherwise, the prejudice is that someone who works at a big tech firm will decide that whatever that big tech firm does is ethical.

Nivenly should not work with Haidra or AI Horde. Here's why.

Foundation vs for-profit

For-profit companies like Microsoft (GitHub), Google, OpenAI (ChatGPT), etc., might find training AI models on crawled datasets to be perfectly legal or ethical.

Users on Mastodon, folks like artists, writers, and other creators, certainly do not.

It's important to step outside of a technical career bubble and consider that what might be a path to make money for a venture capital firm or a startup is certainly not a path forward for a charity, foundation, or cooperative. Especially one that wants to maintain good standing with a federated community.

Cooperatives are all about ethics, and so should be Hachyderm's Mastodon instance and Nivenly at large.

Nivenly should not behave like a for-profit. A non-profit has very little to gain from helping train AIs; the people asking have much more to gain from getting Nivenly's help.

Appearances

The way that https://github.com/nivenly/community/discussions/2 is phrased, especially the final comment, raises concern.

It makes it seem predestined, as if some discussion with the creator of Haidra happened behind closed doors beforehand and somehow set in stone that this would happen.

"Also Monday the 15th: opening the General Member Discussion" - this seems backwards and also pre-destined, the small sample size in the discussion already raised concerns that should have shut it down well before getting to a general member phase (Even though that should have been first)

Public Relations

Prior to adopting a technology for the foundation, it's good to search Mastodon (and the wider web) for controversy about the subject.

It's good to ask about that beforehand. To ask artists and writers how they feel about it.

I bet you can guess their answer.

If you're pro-AI, you probably find them stubborn and intractable.

You probably already have a response in your head right now for how Haidra is different.

"Has Joined" the Nivenly foundation also sounds predetermined, like the community members cannot stop it, like it's part of an incubator project already.

AI Ethics

AI could be ethical, if:

  1. it were opt-in to the training set
  2. it contributed back to datasets that were only opt-in

Stable Diffusion (or contributing back to it) is not opt-in.

Coin models / kudos / working towards making a monetary system (for which folks have screenshots) is also not ethical.

Differences between a crawler and AI

A crawler for a search engine surfaces someone else's work for you to visit.

It cites the author. Ad revenue goes towards that author.

A crawler for an AI takes a corpus of data to generate new content.

It does not cite the author. Revenue goes towards the person who generates the content, or the person it's generated on behalf of.

AI-based scraping and garbage content generation is like a cheap clone of Stack Overflow. It's worse fidelity than the original.

"embracing nuance"

continuing to entertain stealing artists' work

Code Now, Ethics Later

I don't think I can find where it was directly said on Mastodon or Discord, but something to the effect of "code now, ethics later" is the antithesis of what should be happening here. That phrase wasn't said by Nivenly or AI Horde themselves; it was a criticism raised in response to them, and I agree with that criticism.

If something isn't completely above board, it should be scuttled immediately and quietly, before it damages Nivenly's reputation further. Pushing for more discussion and "nuance" sounds like trying to make someone who has refused consent accept it, like tech's infamous and ever-present pop-up "consent" windows.

See the threads for even more criticism of how this has been handled:

"Focusing on the specific issue at hand"

Nivenly's and AI Horde's behavior is shady, and work with Haidra and AI Horde should not be pursued further.

Specifically them.

And the way discussion around pulling in this AI project has gone.

Specifically this attempt at interfacing.

And frankly any future attempts would likely be muddied.

More broadly...

And, more broadly, any AI generation, for several reasons:

  1. it's clear that there's strong insistence from Nivenly and from the devs of Horde that this must go through / already has
  2. no guarantees of addressing ethical concerns of users "code now, ethics later"
  3. not having the process be shady would be good...
  4. doing basic research on how Mastodon would respond to something
  5. thinking like a foundation, not a corporation

security information sharing

What is an ISAC?

An ISAC, or Information Sharing and Analysis Center, is a way for companies and governments to share information about threats and attacks, and to trade defense tips.

Normally, ISACs and ISAOs (Information Sharing and Analysis Organizations) are meant for industry verticals.

I haven't seen much in the way of a "hobbyist" or "volunteer" ISAC, meant for people who run homelabs, Mastodon servers, etc.

I think one should exist that's specifically providing for the following:

  • Fediverse software
  • formal nonprofits and not-for-profits
  • loose groups of people without nonprofit status
  • Selfhosting

What this would involve:

  • A virtual meeting space
    • something on Matrix, bridged to Discord?
    • voting via Loomio?
  • Documentation
    • best practice hardening
    • best practice SIEMs/SOAR/AV, etc
  • Services
    • automated auditing
    • training
    • automated scanning
    • honeypot networks
    • threat intelligence
    • MISP bidirectional threat feed (see the sketch after this list)
  • higher tier services
    • human auditors / bulk pricing / pro-bono?
    • shared SOC?
    • pro-bono IR?
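
As a rough illustration of the "MISP bidirectional threat feed" item, here's a minimal sketch of a member pulling recent IP indicators from a shared MISP instance over MISP's REST API. The server URL, the member API key, and which attribute types the ISAC would actually share are all placeholders and assumptions on my part, not settled decisions.

```python
"""Minimal sketch: pull recent IP indicators from a shared MISP instance.

Assumes the ISAC runs MISP and issues each member an API key.
The URL, key, and attribute types below are hypothetical.
"""
import requests

MISP_URL = "https://misp.example-isac.org"  # placeholder
MISP_KEY = "YOUR-MEMBER-API-KEY"            # placeholder


def pull_recent_bad_ips(window: str = "1d") -> list[str]:
    """Fetch IP attributes published within `window` (e.g. "1d")."""
    resp = requests.post(
        f"{MISP_URL}/attributes/restSearch",
        headers={
            "Authorization": MISP_KEY,
            "Accept": "application/json",
            "Content-Type": "application/json",
        },
        json={
            "returnFormat": "json",
            "type": ["ip-src", "ip-dst"],  # which types to share is an assumption
            "last": window,                # only recently published attributes
            "to_ids": 1,                   # only indicators flagged for detection
        },
        timeout=30,
    )
    resp.raise_for_status()
    attrs = resp.json().get("response", {}).get("Attribute", [])
    return [a["value"] for a in attrs]


if __name__ == "__main__":
    for ip in pull_recent_bad_ips():
        print(ip)  # e.g. feed these into a local blocklist or fail2ban
```

The "bidirectional" half would be members pushing their own sightings back up as MISP events, which the same API supports; I've left that out to keep the sketch short.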

Volunteer types:

  • Client sysadmins!
  • Security documentation writers
  • Security news reporters
  • Threat researchers
  • pro-bono auditors? paid auditors?
  • pro-bono SOC? paid SOC?
  • pro-bono IR? paid IR?

For example:

  • port scanning all member boxes (see the sketch after this list)
  • IP reporting (via fail2ban) enabled for all members
  • shared rule development
  • shared SIEM resources? (this is probably problematic)
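
To make the port-scanning example concrete, here's a minimal sketch of a plain TCP connect scan against one member box, flagging anything open that the member didn't declare. The host address and the expected-port set are made up for illustration, and a real ISAC would only ever scan members who have explicitly opted in.

```python
"""Minimal sketch: TCP connect scan of a member box, flagging surprises.

The host and the expected-ports set are hypothetical; scanning
should only ever target members who have opted in.
"""
import socket


def open_ports(host: str, ports: range, timeout: float = 0.5) -> set[int]:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.add(port)
    return found


if __name__ == "__main__":
    member_host = "203.0.113.10"  # placeholder (a TEST-NET-3 documentation address)
    expected = {22, 80, 443}      # ports the member says should be open
    for port in sorted(open_ports(member_host, range(1, 1025)) - expected):
        print(f"unexpected open port on {member_host}: {port}")
```

A real service would layer scheduling, rate limiting, IPv6, and consent records on top, but the core check is about this simple.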

Similar things:

  • IFTAS (fediverse moderator focused) http://about.iftas.org/ - this ISAC would cover more than just fedi
  • AbuseIPDB - this ISAC would include much more than just reporting bad activity: also responding to web-wide problems, training, hardening, etc
  • BunkerWeb, TheHive, MISP (things we might use, but this ISAC isn't just one technology)

Basically an ISAC spans:

  • multiple kinds of software (not just, say, mastodon)
  • multiple "vendors" (well, in this case, open source security projects)
  • multiple services/roles for/by members

Problems:

  • GDPR, privacy concerns
  • people being wary of security cooperation e.g. with bidirectional threat feeds
  • information sharing might be more limited, because vetting might not be as strict as in other ISACs
  • limited time/attention for members

Todo:

  • convey what it is effectively
  • explain how you'd interact with it
  • explain how you get value

What are your thoughts?

message me @risottobias@tech.lgbt!