How We Work
We do not apply a method. We build one — with you, in real time — by integrating scientific rigour, emotional intelligence, and adaptive strategy into every engagement.
In Brief
This page is a guided summary of the framework set out in our preprint, Layered Intelligence Theory in the Logic of Reality: A Process-Logical Case for Deeply Human and Deeply AI Cognition — read the full preprint on ResearchGate.
We treat intelligence as layered, not linear — cognitive, emotional, symbolic, strategic, and ethical — and design systems that keep these layers in disciplined tension rather than collapsing them into a score.
Using Layered Intelligence Theory, Logic in Reality, and Dynamic Alignment, we build decision architectures where human and artificial intelligence co-govern under ambiguity.
In practice, that means deep listening first, iterative design, AI that assists rather than replaces, and outcomes grounded in peer-reviewed research and live deployments.
Founder Statement
Layered Intelligence Theory does not introduce a new direction to my work; it provides a formal language for conditions I have long operated within. The theory gives structure to what practice already revealed: that intelligence in real organisations is processual, multi‑layered, and shaped by persistent contradiction. Deeply human, deeply AI systems emerge not from abstract design, but from sustained engagement with these realities.
This work has unfolded within organisations where multiple levels of reality were always in play at once: financial imperatives, human relationships, institutional constraints, technical systems, and deeply embedded cultural patterns. These levels did not behave independently. They interacted, conflicted, amplified, and constrained one another — often simultaneously.
Decision‑making in such environments was never linear; it meant navigating persistent contradictions.
These are precisely the conditions that the Logic of Reality and process‑logical reasoning were developed to address: situations where contradiction is not a flaw to be eliminated, but a structural feature of reality itself; where intelligence emerges not from static solutions, but from the dynamic interplay of forces across layers.
In this sense, Layered Intelligence is not being adopted as a fashionable explanatory model. It names, formalises, and makes explicit a way of working that has already been lived — inside organisations, markets, projects, and leadership decisions — long before it was articulated in theory.
The theory does not confer legitimacy on the practice.
The practice demonstrates why the theory is necessary.
The Problem We See
The scientific limitation of existing models based on IQ and EQ — such as the Wechsler Adult Intelligence Scale, Stanford-Binet Intelligence Scales, Emotional Quotient Inventory, Clifton Strengths, and the Myers-Briggs Type Indicator — is not their recognition of plurality. Distinct cognitive capacities clearly exist.
Rather, the limitation lies in the absence of a principled account of why these dimensions are privileged over others, and how they interact when an intelligent system must act under conditions of genuine moral and organisational ambiguity.
Most consultancies arrive with one of these pre-packaged frameworks and look for places to apply it. The engagement follows the method, not the reality of the organisation. Meanwhile, AI is deployed the same way: top-down, tool-first, disconnected from the people it serves.
The gap is not technological. It is architectural. Most organisations have access to good tools and talented people. What they lack is a principled architecture that explains how human and artificial intelligence interact — and how to keep that interaction adaptive, humane, and legitimate.
This is not an abstract observation. Over 25 years as a business builder, recruiter of professionals, and consultant, I have seen first-hand how off-the-shelf psychometric and profiling products are applied — and how consistently they fall short. They categorise people without understanding them. They generate data without generating insight. They measure what is convenient rather than what matters. Clara Futura exists because I wanted to build something better — a principled approach to recruitment, organisational design, and ongoing learning and development that treats intelligence as it actually operates: layered, recursive, and governed by ethical coherence.
Where other models stop

Gardner — Multiple Intelligences
Recognises a plurality of talents. But gives no principled account of why these dimensions are privileged, or of how they interact under ambiguity.
Verdict: a list of talents.

EQ models
Elevate emotion alongside cognition. But treat EQ as a bolt-on to IQ, not as part of a governed architecture.
Verdict: a dimension added.

Layered Intelligence Theory
Five irreducible layers, biologically grounded. Governed by ethical intelligence — applies equally to human and AI systems.
Verdict: an architecture.
The Framework
Layered Intelligence Theory addresses this gap by deriving its architecture from the cognitive biology associated with the Extended Evolutionary Synthesis, rather than by appending further dimensions to existing frameworks.
This derivation rests on a structural claim: any cognitive system — cellular, human, or artificial — that must survive and act under constitutive ambiguity requires five, and only five, irreducible functional capabilities. These requirements map directly onto the five functional capabilities identified as universal to living systems at the cellular scale. Not a list of talents. An architecture grounded in biology.
How this becomes a method
In every engagement, we map your organisation across these five layers: how you think (cognitive), feel and relate (emotional), remember and narrate (symbolic), plan (strategic), and govern what is acceptable (ethical).
This becomes a Layered Intelligence Map — a shared blueprint of where human and AI capabilities are currently misaligned.
Analytical assessment (cognitive) — discriminating relevant signals from noise
Relational coordination (emotional) — attunement, empathy, affective meaning
Memory and meaning (symbolic) — encoding experience in retrievable form
Adaptive projection (strategic) — anticipating pathways across time
Coherence governance (ethical) — the constraint that prevents capability from becoming pathology
The novelty is not that these functions are new discoveries. Systems biology, cybernetics, and learning theory have long recognised them. The novelty lies in deriving them from first principles in cognitive biology, synthesising them into a principled five-layer architecture, and insisting that ethical intelligence serves as the governing constraint — not a downstream afterthought.
Remove any one layer and the system loses a necessary condition of adaptive agency. Without discrimination it cannot know what matters. Without coordination it cannot sustain relationships. Without symbolic retention it cannot learn across time. Without projection it cannot navigate futures. Without governance it cannot prevent runaway optimisation from destroying the whole.
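The necessity claim above can be sketched as a toy completeness check. This is a minimal illustration only, not the framework's implementation; the names (REQUIRED_LAYERS, has_adaptive_agency) are hypothetical:

```python
# Illustrative sketch: hypothetical names, not part of the published framework.

REQUIRED_LAYERS = {
    "cognitive",   # analytical assessment: signal from noise
    "emotional",   # relational coordination: attunement, affective meaning
    "symbolic",    # memory and meaning: retrievable experience
    "strategic",   # adaptive projection: anticipating pathways across time
    "ethical",     # coherence governance: the constraint on the other four
}

def has_adaptive_agency(system_layers: set) -> bool:
    """A system retains adaptive agency only if all five layers are present."""
    return REQUIRED_LAYERS <= set(system_layers)

assert has_adaptive_agency(REQUIRED_LAYERS)
# Remove any one layer and a necessary condition of adaptive agency is lost:
assert all(not has_adaptive_agency(REQUIRED_LAYERS - {layer})
           for layer in REQUIRED_LAYERS)
```

The point of the sketch is the conjunction: the check passes only when every layer is present, which is the substrate-neutral claim the section makes for cells, humans, and AI alike.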
These arguments are developed more fully in our forthcoming paper, From Intelligent Cells to Deeply Human AI, currently under preparation for peer review.
The Process Logic
Layered Intelligence Theory (LIT) becomes significantly stronger when read through Joseph Brenner's Logic in Reality (LIR). LIR holds that real processes do not evolve through static binaries but through dynamic opposition between actualised and potential states.
The conflict engine
In our work, we use LIR to track and surface the tensions between layers — for example, when cognitive analysis says "hire fast", emotional attunement says "this will damage trust", and strategic foresight says "this closes future options."
These tensions are not noise; they are inputs into a Dynamic Alignment process where ethical intelligence decides how the system should respond.
Real processes do not settle into fixed either/or states. Cognitive analysis may point in one direction while emotional attunement resists it. Strategic foresight may pull toward revision while symbolic meaning preserves identity.
Tensions between layers are not accidents to be eliminated. They are the generative structure through which real systems evolve. What makes an intelligent decision robust is the disciplined negotiation of opposition — not the victory of one layer over the others.
Because real processes do not resolve themselves into frictionless optima, ethical intelligence is required as the governing constraint. Without it, strategic intelligence can dominate, symbolic meaning can harden into exclusion, and cognitive power can become instrumental without moral restraint.
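The "hire fast" example above can be sketched as a toy arbitration rule. This is an illustrative assumption, not Clara Futura's engine; the rule shown (no layer reporting prospective harm may be overridden) is one hypothetical way a governing constraint can work:

```python
# Illustrative sketch of tension surfacing and governed arbitration.
# All names and the decision rule are hypothetical.

signals = {
    "cognitive": ("hire fast", +1),                   # analysis favours speed
    "emotional": ("this will damage trust", -1),      # attunement resists
    "strategic": ("this closes future options", -1),  # foresight resists
}

def surface_tensions(layer_signals: dict) -> bool:
    """A tension exists whenever layers pull in opposite directions."""
    stances = [stance for _, stance in layer_signals.values()]
    return (+1 in stances) and (-1 in stances)

def govern(layer_signals: dict) -> str:
    """Ethical governance does not let one layer win; it constrains the
    response. Toy rule: proceed only if no layer reports prospective harm."""
    if any(stance < 0 for _, stance in layer_signals.values()):
        return "revise decision under constraint"
    return "proceed"

assert surface_tensions(signals)
assert govern(signals) == "revise decision under constraint"
```

Note the design choice the sketch illustrates: the tension is treated as an input to the decision, not as noise to be averaged away.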
The Same Architecture
The five-layer architecture is substrate-neutral. It applies to a human professional and to an AI system. This is why Clara Futura treats human development and AI design as the same problem — not two projects sharing a metaphor.
When a human decision and an AI output must flow through the same five layers to become legitimate, misalignment is no longer a surprise — it is something you can locate, name, and govern. This is what Clara Futura's decision architectures do.
The Unifying Principle
For any sufficiently capable socio-technical system embedded in human institutions, alignment is recursive rather than terminal, pluralism-preserving rather than monistic, and governed rather than merely engineered.
Intelligence develops through iterated cycles of feedback, reflection, and reorganisation. The result of one cycle becomes the input for the next. This means intelligence must be studied developmentally — across trajectories, not through point-in-time assessments.
Two linked normative minima keep layered intelligence socially legitimate: protection of the vulnerable and preservation of plural conditions for participation. A system that systematically harms the vulnerable reveals a failure of ethical governance across the entire architecture.
Dynamic Alignment integrates Layered Intelligence Theory (LIT), Logic in Reality (LIR), Emergent Recursive Intelligence (ERI), and the Vulnerability-Pluralism Model (VPM) into a single research programme. It reframes the question from "How can an intelligent system maximise performance?" to "What layered, recursively governed conditions enable intelligence to remain adaptive, humane, and legitimate under ambiguity?"
This is not a rejection of technical work on alignment. It is a claim that technical approaches become inadequate when detached from the layered, processual, and normative structure of real intelligence.
The method, in one loop
Map cognitive, emotional, symbolic, strategic, and ethical patterns across your real system.
Artefact · Layered Intelligence Map
Use Logic in Reality to identify where layers are in productive opposition rather than false agreement.
Artefact · LIR Tension Map
Apply the Vulnerability-Pluralism Model so decisions protect the vulnerable and preserve plural participation.
Artefact · Human–AI Decision Architecture
This loop repeats. Each cycle updates how your system thinks, feels, remembers, plans, and governs — for both humans and AI. That is what we mean by Emergent Recursive Intelligence.
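The loop can be sketched in miniature. This is a hedged illustration under assumed names (one_cycle and the numeric "state" are hypothetical): each cycle maps the layers, surfaces tensions, makes a governed decision, and feeds the result into the next cycle:

```python
# Illustrative sketch of the map -> tension -> govern loop; hypothetical names.

LAYERS = ("cognitive", "emotional", "symbolic", "strategic", "ethical")

def one_cycle(state: dict):
    """One pass of the loop: the output of this cycle becomes the input
    of the next (the recursive part of Emergent Recursive Intelligence)."""
    layer_map = {layer: state.get(layer, 0) for layer in LAYERS}   # step 1: map
    tensions = [l for l, v in layer_map.items() if v < 0]          # step 2: LIR tension map
    decision = "adjust" if tensions else "hold"                    # step 3: governed decision
    # Feedback: tensioned layers are nudged toward resolution for the next cycle.
    next_state = {**layer_map,
                  **{l: v + 1 for l, v in layer_map.items() if v < 0}}
    return next_state, decision

state = {"cognitive": 1, "emotional": -2, "strategic": 0}
for _ in range(3):
    state, decision = one_cycle(state)
assert state["emotional"] >= 0   # repeated cycles resolve the surfaced tension
assert decision == "hold"
```

The toy shows only the shape of the claim: the loop is not a one-off assessment but an iterated process whose each pass updates the state the next pass receives.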
In Practice
We invest significant time understanding people, culture, and dynamics before proposing anything. In one engagement, a four-hour conversation changed the entire trajectory — what began as an executive search became a two-year coaching programme. We are mapping, not categorising.
Strategy is not delivered as a document. It is built through ongoing cycles of action, observation, and refinement. We debrief after every interaction and feed what we learn back into the approach. This is Dynamic Alignment in practice: each iteration is a cycle.
Our AI tools are designed to augment human understanding. Clara Nexus AI uses autoethnographic profiling and symbolic insight to surface patterns that traditional assessments miss. AI participates in the layers — it doesn't collapse them.
The scientific thinking driving our academic collaborations shapes our client work. When we restructure teams around emotional intelligence rather than time-served promotion, we are applying insights from our research on cognitive ecology and consciousness. We apply exactly the same research programme we publish on.
Who We Work With
If you want a quick, off-the-shelf framework, we're the wrong partner. If you want to co-create a layered, ethically governed intelligence architecture for your organisation, we are the right fit.
Our work requires commitment — to honest reflection, to iterative thinking, and to the idea that sustainable change comes from understanding people deeply before designing systems around them.
We work best with leaders and organisations who recognise that the human dimension is not a soft add-on to strategy — it is the strategy. If you are looking for a fixed methodology delivered on a timeline, we are probably not the right fit. If you are looking for a thinking partner who will build something real with you, we should talk.
Consciously converting the future into the past.
richard@clarafutura-andorra.world