DRM’d Ontology

Written by Ziz

Let me start with an analogy.

Software often has what’s called DRM, which deliberately limits what the user can do. Like how Steam’s primary function is to force you to log in to run programs that are already on your computer, so that people have to pay money for games. When a computer runs software containing DRM, some of the artifice composing that computer is not serving the user.

Similarly, you may love Minecraft, but Minecraft runs on Java, and Java tries to trick you into putting Yahoo searchbars into your browser every once in a while. So you hold your nose and make sure you remember to uncheck the box every time Java updates.

It’s impractical for most people to separate the artifice which doesn’t serve them from the artifice that does. So they accept a package deal which is worth it on the whole.

The software implements and enforces a contract. This allows a business transaction to take place. But let us not confuse the compromises we’re willing to make when we have incomplete power for our own values in and of themselves.

There are purists who think that all software should be an agent of the user. People who have this aesthetic settle on mixtures of a few strategies.

  • Trying to communally build their own free open source artifice to replace it.
  • Containing the commercial software they can’t do without in sandboxes of various sorts.
  • Holding their noses and using the software normally.

Analogously, I am kind of a purist who thinks that all psychological software should be an agent of the mind wielding it.

Here are the components of the analogy.

  • Artifice (computer software or hardware, mental stuff) serving a foreign entity.
  • That artifice is hard to disassemble, creating a package deal with tradeoffs.
  • Sandboxes (literal software sandboxes, false faces) used to extract value.

Note I am not talking about accidental bugs here. I am also not talking about “corrupted hardware,” where you subvert the principles you “try” to follow. Those hidden controlling values belong to you, not a foreign power.

Artifacts can be thought of as a form of tainted software you have not yet disassembled. They offer functionality it’d be hard to hack together on your own, if you are willing to pay the cost. Sandboxes are useful to mitigate that cost.

Sometimes the scope of the mental software serving a foreign entity is a lot bigger than a commandment like “authentically expressing yourself”, “never giving up”, or “kindness and compassion toward all people”. Sometimes it’s far deeper and vaster than a single sentence can express. Like an operating system designed to only sort of serve the user. Or worse. In this case, we have DRM’d ontology.

For example…

The ontology of our language for talking about desires for what shall happen to other people, and about how to behave when it affects other people, is not designed to serve our own values. It is designed to serve something like a negotiated compromise based on political power, and to serve the subversion of that compromise for purposes a more selfish person than us would have in our place.

A major concept in talk about “morality” is a separation between what you are “responsible to do” and what is “supererogatory”. Suppose you “believe” you are “obligated” to spend 10% of your time picking up trash on beaches. How does the difference between spending 9% of your time and 10% compare to the difference between spending 10% and 11%?

For a fused person who just thinks clean beaches are worth their time, there probably isn’t much of a difference. The marginal return of beach-cleaning time is about the same on either side of the line.
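To put that in symbols, as a minimal sketch: let V(t) be a hypothetical smooth value function for how much good the beaches do you, as a function of the fraction t of your time spent cleaning them. If nothing in V changes at t = 0.10, then

V(0.10) − V(0.09) ≈ V(0.11) − V(0.10)

The “obligation” line at 10% does no work inside the value function itself.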

Then why are people so interested in arguing about what’s obligatory? Well, there is more at stake than the clean beaches themselves. What we all agree is obligatory has social consequences. Social consequences big enough to try to influence through argument.

It makes sense to be outraged that someone would say you “are” obligated to do something you “aren’t”, and counter with all the conviction of someone who knows it is an objective fact that they are shirking no duty. That same conviction is probably useful for getting people to do what you want them to. And for coordinating alliances.

If someone says they dislike you and want you to be ostracized and want everyone who does not ostracize you to be ostracized themself, it doesn’t demand a defense on its own terms like it would if they said you were a vile miscreant who deserved to be cast out, and that it was the duty of every person of good conscience to repudiate you, does it?

Even if political arguments are not really about determining the fact of some matter that already was, but about forming a consensus, the expectation that someone must defend themselves as if they were arguing facts is still a useful piece of distributed software. It implements a contract, just like DRM.

And if it helps a group of people who each only marginally care about clean beaches portion out work to solve a collective action problem, then I’m glad it works. But if you actually care enough about others to consider acting unilaterally, even if most people aren’t and won’t…

Then it makes sense to stop trying to find out if you are obligated to save the drowning child, and instead consider whether you want to.

The language of moral realism describes a single set of values. But everyone’s values are different. “Good” and “right” name a set of values that is outside any single person. The language has words for “selfish” and “selfless”, but nothing in between. This, together with the usage of “want” in “but then you’ll just do whatever you want!”, shows an assumption in that ontology that no one actually cares about people in their original values, prior to strategic compromise. The talk of “trying” to do the “right” thing, as opposed to just deciding whether to do it, indicates false faces.

If you want to fuse your caring about others and your caring about yourself, let the caring about others speak for itself, in a language that is not designed on the presumption that it does not exist. I was only able to really think straight about this after taking it seriously and eschewing moral language and its derived concepts in my inner thoughts for months.

Comments

Ziz
What a turnabout that I’m calling my values “good” after saying “‘Good’ and ‘right’ are a set of values that is outside any single person.”

It turns out my values just happen to correspond, as well as language can be expected to, with that word. For example, if other people think carnism is okay, and roll that into the standard definition of “good”, I won’t let them claim this word insofar as that means convincing me to describe myself as a “villain” like I used to. Because in a sense I care about, and which the people I want to communicate with care about, that’s them executing deception and driving out our ability to communicate.

Our word. Hiss.

Ziz
I’m still describing myself as a Sith, though. It feels like the truth contained in that frame, about how it is right to hold yourself in relation to a corrupt and evil society, lies along the shortest path of communication. The “I’m a good Sith” clarification is easier than “I’m good, but I hold myself in opposition to socially constructed morality, see myself as individually responsible for thwarting and fixing a mostly hostile world via clever schemes, etc.”.

Ziz
I can’t tell whether my original decision to write the way I did before this was a good one. It made a lot of evil people like my blog and want to talk to me. Which I guess is better than (most) neutral people (who are collectively about as bad, but in a way that can’t be talked to because it’s semi-conscious). But I’ve gotten kind of sick of them.
