diff --git a/.gitignore b/.gitignore
index 6bd25a8..9eed735 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,3 +1,8 @@
-# temp files
+# temporary files:
~$*
-.asciidoctor/
\ No newline at end of file
+.asciidoctor/
+
+# draw.io backup and temporary files:
+*.drawio.bkp
+*.drawio.dtmp
+
diff --git a/documentation/correctness/diagrams/criteria.drawio b/documentation/correctness/diagrams/criteria.drawio
new file mode 100644
index 0000000..0f85ee8
--- /dev/null
+++ b/documentation/correctness/diagrams/criteria.drawio
@@ -0,0 +1,87 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/documentation/correctness/diagrams/criteria.png b/documentation/correctness/diagrams/criteria.png
new file mode 100644
index 0000000..f50e72a
Binary files /dev/null and b/documentation/correctness/diagrams/criteria.png differ
diff --git a/documentation/correctness/diagrams/criteria.svg b/documentation/correctness/diagrams/criteria.svg
new file mode 100644
index 0000000..55091fa
--- /dev/null
+++ b/documentation/correctness/diagrams/criteria.svg
@@ -0,0 +1,3 @@
+
+
+
\ No newline at end of file
diff --git a/documentation/correctness/diagrams/levels.drawio b/documentation/correctness/diagrams/levels.drawio
new file mode 100644
index 0000000..e262187
--- /dev/null
+++ b/documentation/correctness/diagrams/levels.drawio
@@ -0,0 +1,116 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/documentation/correctness/diagrams/levels.png b/documentation/correctness/diagrams/levels.png
new file mode 100644
index 0000000..47b26d3
Binary files /dev/null and b/documentation/correctness/diagrams/levels.png differ
diff --git a/documentation/correctness/diagrams/levels.svg b/documentation/correctness/diagrams/levels.svg
new file mode 100644
index 0000000..be96151
--- /dev/null
+++ b/documentation/correctness/diagrams/levels.svg
@@ -0,0 +1,3 @@
+
+
+
\ No newline at end of file
diff --git a/documentation/correctness/findings.adoc b/documentation/correctness/findings.adoc
new file mode 100644
index 0000000..f064519
--- /dev/null
+++ b/documentation/correctness/findings.adoc
@@ -0,0 +1,96 @@
+= Findings
+
+It makes sense to phrase the answer as a collection of _findings_, instead of giving a hard "`yes/no`" answer.
+A **finding** is a specific point in a serialization chunk where a specific kind of incorrectness arises.
+
+Many (kinds of) findings allow a meaningful (full or partial) _recovery_.
+Recovery is often obvious enough that it can be performed programmatically, i.e., automatically.
+As an example: an illegal trailing comma in a JSON array should be reported but can be recovered from, simply by ignoring/skipping over it.
+
+An unresolved reference can be left in as-is, and treated as an unknown (but not an _absent_) value.
+It's then up to consumers of serialization chunks to give a special meaning — i.e., domain-specific semantics — to that particular value.
+In some cases, that might not be possible or desirable, e.g.: most interpreters and code generators can only work with a completely correct and complete model.
+That means consumers might have to combine —or "`stitch together`"— multiple serialization chunks to produce a complete model.
+
+It's entirely up to consumers of serialization chunks to validate their correctness, and to decide what to do with any resulting findings.
+Especially findings that don't permit programmatic recovery should be reported very clearly to the domain/subject matter expert, in a way that explains what they can do to repair or prevent them.
+A serialization chunk must be editable/mutable (by a suitable client, after deserialization) even if it has findings.
+
+Consumers of serialization chunks could implement recovery for kinds of findings that permit programmatic recovery, as procedures that modify the serialization chunk.
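+
+Such recovery procedures could be sketched as follows (TypeScript; all names and shapes here are our own assumptions, not part of the LionWeb specification):
+
+[source,typescript]
+----
+// Sketch only: a recovery procedure tries to repair one finding by
+// modifying the serialization chunk, returning true if it did so.
+type SerializationChunk = unknown;  // the parsed JSON of a chunk
+type Recovery = (chunk: SerializationChunk, finding: { kind: string }) => boolean;
+
+// A client could register one recovery procedure per auto-recoverable kind:
+const recoveries = new Map<string, Recovery>();
+----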
+
+
+== Details to be reported in a finding
+
+A reported finding should contain at least the following pieces of information:
+
+* An identification of the _kind_ of the finding.
++
+The kind of the finding should also determine its _severity_.
+This **severity** is an indication of the recoverability of the finding — see <<_severities,below>>.
+We think it's reasonable to assume that every instance of a particular kind of finding has the same severity.
+
+* A _location_: where the finding occurred — see <<_location,below>>.
+
+* A user-readable message explaining what the finding is, and how it should be remedied.
+Ideally, this message can be computed from other information in the finding, specifically its kind and location.
+
+A reported finding could contain more information, depending on its kind.
+
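+A minimal sketch of such a finding in TypeScript (field names are our own assumptions, not prescribed by LionWeb):
+
+[source,typescript]
+----
+// Sketch only: shapes and names are illustrative assumptions.
+type Severity = "fatal" | "stubbable" | "requires-intervention" | "auto-recoverable";
+type FindingLocation = unknown;  // see the section on locations below
+
+interface Finding {
+  kind: string;              // identifies the kind of the finding
+  severity: Severity;        // determined entirely by the kind
+  location: FindingLocation; // where the finding occurred
+  message: string;           // user-readable explanation and remedy
+  // kind-specific extra information can be added by extending this interface
+}
+----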
+
+=== Location
+
+A finding is of little use if it doesn't state exactly where it occurred.
+A number of basic ways of identifying a location exist:
+
+[horizontal]
+
+Text-based::
+A specific location in a text could be given in any of the following forms:
++
+* A pair of integers (line, column), with the text starting at (1, 1).
+* A character index (considering the text as a character stream), starting at 1.
+* A _range_ of text given as a (pair consisting of a) character location and an integer length.
++
+The first two forms are _character locations_ which are interchangeable, although the first form is inherently more user-friendly.
+A range provides more information to work with, except when the length is 1, or so large a number as to be meaningless (e.g., effectively meaning "`the rest of the text`").
++
+For findings arising on the JSON level, these are the only available ways to identify a location.
+Unfortunately, not all JSON parsers are able to report problems in this way.
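++
+Converting between the two character-location forms is straightforward given the text; a sketch (1-based, as above):
++
+[source,typescript]
+----
+// Sketch: convert a 1-based character index to a 1-based (line, column) pair.
+function indexToLineColumn(text: string, index: number): [number, number] {
+  let line = 1;
+  let column = 1;
+  for (let i = 0; i < index - 1; i++) {
+    if (text[i] === "\n") {
+      line++;
+      column = 1;
+    } else {
+      column++;
+    }
+  }
+  return [line, column];
+}
+----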
+
+JSON-based::
+JsonPathfootnote:[https://goessner.net/articles/JsonPath/] is a method to point to specific elements in JSON text in a precise way that's independent of textual location.
+This way is useful from the structural level onwards.
+
+(Sub-)Node-based::
+Beyond the hierarchical level, it becomes possible to address specific nodes by their ID.
+In addition, one can point to a specific feature (by key) and to a specific value of a multi-valued feature by index.
+All in all: (<node ID>[, <feature key>[, <index>]]).
+
+Location forms can be thought of as "`cascading`": it's convenient to augment a node-based location with a JSON-based location, and likewise a JSON-based location with a text-based one.
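+
+These cascading forms could be sketched as nested optional augmentations (TypeScript; the shapes are our own assumptions):
+
+[source,typescript]
+----
+// Sketch only: each location form can optionally carry the next,
+// more textual one as an augmentation.
+interface TextLocation {
+  line: number;     // 1-based
+  column: number;   // 1-based
+  length?: number;  // optional, making this a range
+}
+
+interface JsonLocation {
+  path: string;         // a JsonPath expression, e.g. "$.nodes[3]"
+  text?: TextLocation;  // optional text-based augmentation
+}
+
+interface NodeLocation {
+  nodeId: string;       // the node's ID
+  featureKey?: string;  // optional: a specific feature of that node
+  index?: number;       // optional: an index into a multi-valued feature
+  json?: JsonLocation;  // optional JSON-based augmentation
+}
+----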
+
+
+=== Severities
+
+The severities are:
+
+[horizontal]
+
+Fatal::
+ It's not possible to recover from the finding.
+ E.g., a byte stream that's supposed to be a JSON file is empty, or not even remotely recognizable as a LionWeb-compliant serialization chunk.
+
+Stubbable::
+ It's possible to recover from the finding by providing a stub.
+ E.g., an unresolved reference could be deserialized in the form of a "`stubbing`" proxy node for the target of that reference.
+ This proxy node at least preserves the referred-to ID, and potentially `resolveInfo`, or a suitable (abstract) concept.
+ Such a proxy node can later on be replaced by the actual node that is the reference's target, e.g. when an additional serialization chunk containing that target gets deserialized (in the same context).
+
+Requires intervention::
+ Intervention —presumably a manual one by a human, but some kind of ML/AI could also be employed— is required to recover from this finding.
+ This is in particular the case for missing information, e.g.: required features lacking a value setting in a node.
+
+Auto-recoverable::
+This is "`the best`" severity, as its recovery is programmatic/automatic.
+The typical example is the trailing comma in a JSON array; we might think of that as "`ignorable`".
+ Another class of examples is formed by incorrectness that's solved by performing a _migration_.
+
diff --git a/documentation/correctness/levels.adoc b/documentation/correctness/levels.adoc
new file mode 100644
index 0000000..9305b57
--- /dev/null
+++ b/documentation/correctness/levels.adoc
@@ -0,0 +1,113 @@
+= Correctness levels
+
+LionWeb provides a notion for correctness of serialization chunks in terms of multiple _levels_.
+Each level groups a set of kinds of incorrectness, and whether and how occurrences of these kinds of findings can be meaningfully recovered from.
+Additionally, we explain how findings should be reported.
+This approach provides a degree of resilience: findings arising in one level don't necessarily prohibit determining whether a serialization chunk exhibits incorrectness on another level, provided a finding can be meaningfully recovered from.
+
+The first three levels pertain purely to the JSON that's supposed to be a serialization chunk, and are language(s)-agnostic — essentially, this is about whether the JSON is _well-formed_ as a serialization chunk in JSON format.
+The remaining two levels require language-awareness, but checking them could still take place entirely within the serialization chunk, although for the final, fifth level —constraints— this is usually cumbersome and better performed on a programmatic representation of the model.
+
+We admit some flexibility on the first three, JSON-centric levels.
+That is because the behavior and adaptability of JSON parsers differ from implementation to implementation.
+E.g., some parsers might simply ignore/skip over trailing commas, or have the last of key-value pairs with duplicate keys "`win`".
+We simply can't (and don't want to) demand that a LionWeb implementation always produce the exact same findings, especially if they are auto-recoverable.
+
+Serialization chunks are by definition allowed to be incomplete.
+It's not important whether references resolve _within_ the serialization chunk: an unresolvable reference doesn't affect interoperability/interchangeability.
+For this reason, unresolvable references are regarded as an incorrectness purely _within_ the serialization chunk, but one with _stubbable_ severity.
+
+We give an overview of the various levels, and what each pertains to.
+At first, we do this without specifying the corresponding kinds of findings that may arise in that level.
+The list is roughly in order from low-level and language-agnostic, to language(s)-specific.
+
+[horizontal]
+
+JSON::
+Pertains to the text that's (supposed to be) the JSON serialization chunk.
+This is the lowest syntactic level.
+
+Structural::
+Pertains to the syntactic structure of the (successfully) parsed JSON text.
+
+Hierarchical::
+Pertains to relational constraints within the parsed JSON text.
+
+Meta-structural::
+This means that the serialization chunk conforms to the language(s) it declares to be an instance of.
+
+Referential::
+This means that references can be resolved.
+
+Constraints::
+This pertains to a semantic, language(s)-specific notion of correctness.
+
+[NOTE]
+====
+For all levels except for the constraints, it's possible to identify a fully-specified set of kinds of findings that may arise at each of them, based purely on the LionWeb specification for the JSON serialization format.
+To keep this document readable, we first discuss all the levels, and describe that set of kinds of findings elsewhere.
+In particular, that description will be derived from the implementation of the `@lionweb/validation` package.
+====
+
+We elaborate on each of these levels below.
+
+== JSON (low-level syntactic)
+
+This boils down to whether the text (or byte stream) that's supposed to be a serialization chunk, is a valid JSON document.
+Findings arising on this level typically have severity fatal, or are auto-recoverable by means of ignoring the problem.
+However, ignoring the problem might also hide information by skipping over unparseable content that could be made parseable through intervention.
+As already mentioned: whether it's possible to even generate a finding depends completely on the JSON parser used.
+
+== Structural syntactic
+
+In essence, this level is equivalent to conforming to a generic, language-agnostic JSON Schema —or some other formalism to describe the syntactic structure of the JSON— for the serialization format.
+The expressive power of JSON Schema is relatively weak, so we also need the hierarchical level, but at least there's good support for JSON Schema across platforms/programming languages that allows this to be implemented fairly quickly.
+
+== Hierarchical
+
+Problems in the JSON that can't be caught at the structural syntactic level should be caught at this level.
+Examples of hierarchical findings are:
+
+* Multiple nodes with the same ID occur.
+* A node that declares a certain parent node must be contained as a child of that parent node — provided that parent node is present in the serialization chunk; if it's not, then we can't check that constraint.
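+
+The first example could be checked with a single pass over the chunk's nodes; a sketch (assuming the parsed chunk carries a `nodes` array whose elements have an `id`):
+
+[source,typescript]
+----
+// Sketch: report every node ID that occurs more than once.
+function findDuplicateIds(nodes: { id: string }[]): string[] {
+  const seen = new Set<string>();
+  const duplicates = new Set<string>();
+  for (const node of nodes) {
+    if (seen.has(node.id)) {
+      duplicates.add(node.id);
+    }
+    seen.add(node.id);
+  }
+  return [...duplicates];
+}
+----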
+
+== Meta-structural
+
+This level requires knowledge of the languages that a serialization chunk declares to conform to.
+Examples of meta-structural findings are:
+
+* Meta-pointers can't be resolved (within the languages that the serialization chunk declares to conform to).
+* A node must declare values for the features of the classifier it declares.
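+
+Resolving a meta-pointer could be sketched as a lookup (TypeScript; the meta-pointer fields follow the serialization format, but the lookup structure is our own assumption):
+
+[source,typescript]
+----
+// Sketch: resolve a meta-pointer against the declared languages.
+interface MetaPointer {
+  language: string;  // key of the language
+  version: string;   // version of the language
+  key: string;       // key of the language element
+}
+
+// languageElements: "<language key>@<version>" -> element key -> element
+function resolveMetaPointer(
+  pointer: MetaPointer,
+  languageElements: Map<string, Map<string, unknown>>,
+): unknown | undefined {
+  return languageElements.get(`${pointer.language}@${pointer.version}`)?.get(pointer.key);
+}
+----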
+
+== Referential
+
+The single finding arising at this level (with stubbable severity) is that the target of a `Reference` feature can't be found, either in the serialization chunk itself or in nodes provided from elsewhere.
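+
+A stubbing proxy node for such an unresolved target could look as follows (TypeScript sketch; the shape is our own assumption):
+
+[source,typescript]
+----
+// Sketch: a proxy standing in for the unresolved target of a reference.
+interface ProxyNode {
+  id: string;           // the referred-to ID, which is always known
+  resolveInfo?: string; // preserved from the reference, if present
+  isProxy: true;        // marks this node as a stand-in
+}
+
+// When a chunk containing the actual target is later deserialized in the
+// same context, the proxy can be replaced by the real node.
+----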
+
+== Constraints
+
+We choose to pragmatically treat the constraints level as a special correctness level, as constraint violations don't affect interoperability/interchangeability directly.
+
+A violation of a constraint would typically lead to a direct failure in the model's semantics — i.e.: its execution through interpretation, or generating code and running that —, or to the result of the semantics not making sense in the (context of the) domain, or both.
+It's up to the language(s) designers to make that distinction (whenever it exists) clear to the language's users.
+
+We give an example of a language-specific constraint.
+Consider a language with core concepts _tables_, _columns_ within those, and SQL-like _queries_.
+These queries _reference_ columns within tables, e.g., in the form: `<table>.<column>`.
+A constraint for any reference would be: "`in a reference to a column of a referenced table, the column referenced must be a column of the referenced table`".
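+
+Checking that constraint against a programmatic representation could be sketched as follows (all names here are hypothetical):
+
+[source,typescript]
+----
+// Sketch: Table, Column, and ColumnReference are hypothetical concepts.
+interface Column { name: string }
+interface Table { name: string; columns: Column[] }
+interface ColumnReference { table: Table; column: Column }
+
+function checkColumnReference(ref: ColumnReference): string | undefined {
+  return ref.table.columns.includes(ref.column)
+    ? undefined  // no violation
+    : `column "${ref.column.name}" is not a column of table "${ref.table.name}"`;
+}
+----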
+
+Another example would be that names should appear uniquely within bounded contexts.
+
+In our experience, a significant part of the constraints is "`type-informed`", which means that _type computation/derivation_ is typically an intrinsic part of the constraints aspect of any language.
+
+This level is probably most conveniently phrased in terms of a programmatic representation, but for every programmatic representation there's an equivalent formulation purely in terms of the serialization format.
+This works just as well, and maybe even better, because semantics can be more generically "`patched`" w.r.t. non-resolving references.
+However, the constraints are not necessarily specified in a form that's interpretable in terms of a serialization chunk, and agnostic to any particular programmatic representation.
+That might be enough of an obstacle to compute constraint violations only on the programmatic level, disregarding them completely on the level of serialization chunks.
+
+[NOTE]
+====
+LionWeb uses explicit, ID-based references, which means that scoping is not needed to resolve references.
+Nevertheless, scoping probably still plays some role in any language.
+The constraint stated above can be interpreted as a scoping rule, and the language's UI should take it into account when providing content assist to the language's users.
+====
+
diff --git a/documentation/correctness/overview.adoc b/documentation/correctness/overview.adoc
new file mode 100644
index 0000000..927ee39
--- /dev/null
+++ b/documentation/correctness/overview.adoc
@@ -0,0 +1,46 @@
+= Serialization chunk correctness
+
+Whether a model is _correct_ —or _valid_, or _consistent_— is an important question.
+
+[IMPORTANT]
+====
+We'll talk about _correctness_ from now on, and consider _validity_ and _consistency_ to be equivalent concepts.
+====
+
+Correctness ultimately determines whether a model is useful in the context it exists and is used in.
+Answering that question only with a simple "`yes`" or "`no`" is too simplistic: a model can be incorrect in different ways, and on different levels.
+Having an illegal trailing comma in an array somewhere in the JSON serialization of a model —e.g., `[ 0, 1, 2**,** ]`— should not prevent that serialization chunk from being consumablefootnote:[I.e.: processable, interchangeable, deserializable, etc.].
+Not being able to resolve a reference is a different kind of failure than violating a semantic constraint.
+
+With this document, we aim to provide a useful answer to a narrower question: **When is a _serialization chunk_ correct, and to what extent?**
+We focus purely on serialization chunks, and do not consider programmatic representations of models, e.g. as a result of deserializing serialization chunks.
+(LionWeb intentionally doesn't prescribe anything about the _programmatic representation_ of modelsfootnote:[I.e.: the runtime or in-memory representation that's the result of deserializing a serialization chunk of a model], leaving implementors of language-oriented tooling free to choose the representation that's right for them.)
+This limits the scope to the LionWeb specification without spilling over into areas that the specification doesn't target (or doesn't target _yet_).
+
+We have the following reasons for that:
+
+* A model is —in principle— faithfully representable as a set of serialization chunks.
+In other words: a post-deserialization, programmatic representation should neither add (non-derivable) information, nor lose any.
+Symmetrically: serialization should not lose or add information either.
+
+The programmatic representation of a model (post-deserialization) is quite context-specific, and is typically tied to the particular platform/language usedfootnote:[I.e.: JVM with Java, Kotlin, Java-/TypeScript, etc.].
+This means that the concrete API of the programmatic representation can vary quite a bit, making it more difficult to answer the question in a uniform way.
+
+* The LionWeb serialization JSON format is the cornerstone of the LionWeb specification.
+
+Note that it's entirely up to _clients_ to assert/validate the correctness of a serialization chunk, and to decide what to do with any incorrectness detected.
+(A **client** is a software component consuming serialization chunks for processing —transformation, modification—, persistence, etc.)
+
+The rest of this document is split up in the following sections:
+
+[horizontal]
+xref:findings.adoc[Findings]:: How clients (should) report incorrectness, in the form of _findings_.
+xref:levels.adoc[Correctness levels]:: How the various kinds of findings are organized into levels.
+
+[NOTE]
+====
+This document is _not_ a specification.
+Clients can choose to recognize certain kinds of incorrectness, while not recognizing others.
+Clients can also choose whether to implement recovery (see the sections above) for any kind of incorrectness.
+====
+