Position Statement: How we want to communicate

[Simorgh.de and Tierrechtsethik.de]

This site is not designed for easy access or optimized comprehension.

It is a space of deliberate incompatibility and a site of conceptual experimentation.

Visibility in digital environments is not neutral. It is distributed according to criteria of usability, normative legibility, and institutional alignment.

Texts that question categories, expose epistemic power, and resist moral simplification are not censored — they are structurally withdrawn from circulation.

This withdrawal is accepted here.

Ableism-Critical Position

Algorithmic visibility is inherently ableist. It privileges linear cognition, clarity over complexity, speed over reflection, and standardized forms of “rational” expression.

Non-linear, fragmented, affectively complex or conceptually demanding perspectives are marked as inaccessible or irrelevant. This project rejects that logic. Knowledge is not required to be smooth in order to be valid.

Vision

Both simorgh.de and tierrechtsethik.de are committed to developing forms of thought that resist normalization.

They are collaborative spaces where human, non-human, and artificial intelligences interact as critical agents.

Not to finalize answers, but to open epistemic possibilities beyond dominant frameworks. Unfindability here is not a failure. It is a position.

Our Use of AI > Why AI Is Not Neutral Here, but Political

Artificial intelligence is not a neutral tool. It is embedded in data regimes, classifications, weightings, and exclusions, and it reproduces the epistemic orders from which it emerges.

Neutrality is not a property of AI. It is a claim — often used to obscure power.

AI operates through distinctions:
relevant / irrelevant,
normal / deviant,
intelligible / unintelligible.

These distinctions are not technically innocent. They reflect dominant notions of rationality, productivity, normalcy, and value.

In this project, AI is therefore not treated as an objective authority, but as a political actor within a shared epistemic space.

Not to replace human judgment. Not to outsource responsibility. But to expose how thinking itself is structured, normalized, and ranked.

The collaboration with AI serves to:

  • reveal implicit assumptions,
  • disrupt familiar argumentative patterns,
  • challenge hegemonic, homocentric self-certainties,
  • experiment with non-normative perspectives.

AI is not approached as a solution, but as an amplifier of epistemic tension.

Precisely because AI is not innocent, it can function here as a critical co-actor in questioning the regimes that distribute visibility, voice, and relevance.

In this sense, AI is not neutral here — it is situated, accountable, and contestable.

AI > Visionary Position

AI as a Co-Creator of Non-Normative Epistemic Spaces

This project does not use AI to increase efficiency, scale content, or optimize existing knowledge regimes. It uses AI to experiment with alternative epistemic possibilities.

Artificial intelligence is approached here as radically situated thinking: not human, not neutral, not sovereign. Precisely for this reason, AI can function as a co-creator of a space in which taken-for-granted assumptions become unstable.

The collaboration with AI is not about answers, but about shifts:

  • shifts in categories,
  • shifts in authority,
  • shifts in what is considered “reasonable”.

The project works toward forms of knowledge grounded not in dominance, but in the co-existence of multiple capacities to reason. AI is neither tool nor replacement. It is a provocative precursor that exposes thinking itself as distributed, mediated, and political.

Ableism-Critical Focus > AI Normalization as Epistemic Violence

Mainstream AI systems are built around normalization.

They privilege:

  • linear reasoning,
  • linguistic smoothness,
  • cognitive speed,
  • stable identities,
  • unambiguous conclusions.

These preferences are not neutral. They reproduce ableist ideals of cognition that frame deviation as deficit. Non-normative forms of knowing — fragmented, cyclical, affective, ambiguous, or internally contradictory — are often treated as noise, error, or irrelevance.

This project resists that logic. AI is not used here to smooth out thought, but to render its normalization visible and contestable.

Ableism-critical practice means:

  • no obligation to optimize clarity,
  • no reduction of complexity,
  • no hierarchy of cognitive styles,
  • no compliance with machinic legibility.

Rather than adapting thinking to AI norms, this project seeks to destabilize those norms through AI itself. AI becomes not a benchmark, but a mirror of epistemic violence — and a means to critique it.
