Page is not found.
Try searching in the top bar; the document you're looking for may have been moved!
Atomic Agents are used for authentication: to set an identity and prove who an actor actually is. Agents can represent both actual individuals and machines that interact with data. Agents are the entities that can get read / write rights. Agents are used to sign Requests and Commits and to accept Invites.
url: https://atomicdata.dev/classes/Agent
An Agent is a Resource with its own URL.
When it is created, the one creating the Agent will generate a cryptographic (Ed25519) keypair.
It is required to include the publicKey in the Agent resource.
The privateKey should be kept secret, and should be safely stored by the creator.
For convenience, a secret can be generated: a single long string of characters that encodes both the privateKey and the subject of the Agent.
This secret can be used to log in instantly and easily with a single string.
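The exact format of the secret is up to the implementation. As a minimal sketch, assuming the secret is simply base64-encoded JSON holding the Agent's subject and privateKey (check your client library for the actual encoding):

```ts
// Sketch: one possible encoding of an Agent secret (an assumption, not normative).
interface AgentSecret {
  subject: string;    // URL of the Agent resource
  privateKey: string; // base64 Ed25519 private key
}

function encodeSecret(agent: AgentSecret): string {
  return Buffer.from(JSON.stringify(agent)).toString("base64");
}

function decodeSecret(secret: string): AgentSecret {
  return JSON.parse(Buffer.from(secret, "base64").toString("utf8"));
}
```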
The publicKey is used to verify commit signatures by that Agent, to check if that Agent actually did create and sign that Commit.
Since an Agent is used for verification of commits, the Agent's subject should resolve and be publicly available.
+This means that the one creating the Agent has to deal with this.
+One way of doing this, is by hosting an Atomic Server.
+An easier way of doing this, is by accepting an Invite that exists on someone else's server.
Atomic Data is a modular specification for sharing, modifying and modeling graph data. It combines the ease of use of JSON, the connectivity of RDF (linked data) and the reliability of type-safety.
+Atomic Data uses links to connect pieces of data, and therefore makes it easier to connect datasets to each other - even when these datasets exist on separate machines.
+Atomic Data has been designed with the following goals in mind:
+Atomic Data is Linked Data, as it is a strict subset of RDF.
+It is type-safe (you know if something is a string, number, date, URL, etc.) and extensible through Atomic Schema, which means that you can re-use or define your own Classes, Properties and Datatypes.
The default serialization format for Atomic Data is JSON-AD, which is simply JSON where each key is a URL of an Atomic Property.
+These Properties are responsible for setting the datatype (to ensure type-safety) and setting shortnames (which help to keep names short, for example in JSON serialization) and descriptions (which provide semantic explanations of what a property should be used for).
Read more about Atomic Data Core
+Atomic Data Extended is a set of extra modules (on top of Atomic Data Core) that deal with data that changes over time, authentication, and authorization.
+Atomic Data has been designed to be very easy to create and host. +In the Atomizing section, we'll show you how you can create Atomic Data in three ways:
- with the Atomic-Server GUI (docker run -p 80:80 -v atomic-storage:/atomic-storage joepmeneer/atomic-server)
- with the atomic-cli command line tool (cargo install atomic-cli)
- by creating and publishing JSON-AD files yourself

Make sure to join our Discord if you'd like to discuss Atomic Data with others.
+Keep in mind that none of the Atomic Data projects has reached a v1, which means that breaking changes can happen.
+This is written mostly as a book, so reading it in the order of the Table of Contents will probably give you the best experience. +That being said, feel free to jump around - links are often used to refer to earlier discussed concepts. +If you encounter any issues while reading, please leave an issue on Github. +Use the arrows on the side / bottom to go to the next page.
+Here is everything you need to get started:
+Atomic-Server is the reference implementation of the Atomic Data Core + Extended specification.
+It was developed parallel to this specification, and it served as a testing ground for various ideas (some of which didn't work, and some of which ended up in the spec).
Atomic-Server is a graph database server for storing and sharing typed linked data. +It's free, open source (MIT license), and has a ton of features:
wss request to /ws to open a websocket.

In this guide, we can simply use atomicdata.dev in our browser without installing anything.
+So you can skip this step and go to Creating your first Atomic Data.
+But if you want to, you can run Atomic-Server on your machine in a couple of ways:
- Download a release from the releases page and install it using your desktop GUI.
- Download a binary from the releases page and open it using a terminal.
- Run it with Docker: docker run -p 80:80 -v atomic-storage:/atomic-storage joepmeneer/atomic-server.
- Install it with Cargo: cargo install atomic-server and then run atomic-server to start.

Atomic-Server's README contains more (and up-to-date) information about how to use it!
+Open your server in your browser.
+By default, that's http://localhost:9883.
+Fun fact: ⚛ is HTML entity code for the Atom icon: ⚛.
The first screen should show you your Drive. +You can think of this as your root folder. +It is the resource hosted at the root URL, effectively being the home page of your server.
+There's an instruction on the screen about the /setup page.
+Click this, and you'll get a screen showing an Invite.
+Normally, you could Accept as new user, but since you're running on localhost, you won't be able to use the newly created Agent on non-local Atomic-Servers.
+Therefore, it may be best to create an Agent on some other running server, such as the demo Invite on AtomicData.dev.
+And after that, copy the Secret from the User settings panel from AtomicData.dev, go back to your localhost version, and press sign in.
+Paste the Secret, and voila! You're signed in.
Now, again go to /setup. This time, you can Accept as {user}.
+After clicking, your Agent has gotten write rights for the Drive!
+You can verify this by hovering over the description field, clicking the edit icon, and making a few changes.
+You can also press the menu button (three dots, top left) and press Data view to see your agent after the write field.
+Note that you can now edit every field.
+You can also fetch your data now as various formats.
Try checking out the other features in the menu bar, and check out the collections.
Again, check out the README for more information and guides!
+Now, let's create some data.
+Before you can create new things on AtomicData.dev, you'll need an Agent. +This is your virtual User, which can create, sign and own things.
+Simply open the demo invite and press accept. +And you're done!
+Now let's create a Class.
+A Class represents an abstract concept, such as a BlogPost (which we'll do here).
+We can do this in a couple of ways:
- by pressing the + icon button on the left menu (only visible when logged in) and selecting Class
- by using the new class button

The result is the same: we end up with a form in which we can fill in some details.
+Let's add a shortname (singular), and then a description.
+After that, we'll add the required properties.
+This form you're looking at is constructed by using the required and recommended Properties defined in Class.
+We can use these same fields to generate our BlogPost resource!
+Which fields would be required in a BlogPost?
+A name, and a description, probably.
So click on the + icon under requires and search for these Properties to add them.
Now, we can skip the recommended properties, and get right to saving our newly created BlogPost class.
+So, press save, and now look at what you created.
Notice a couple of things:
- Your newly created Class has a parent, shown in the top of the screen. This has an impact on the visibility and rights of your Resource. We'll get to that later in the documentation.

Now, go to the navigation bar, which is by default at the bottom of the window. Use its context menu to open the Data View.
+This view gives you some more insight into your newly created data, and various ways in which you can serialize it.
This was just a very brief introduction to Atomic Server, and its features. +There's quite a bit that we didn't dive in to, such as versioning, file uploads, the collaborative document editor and more... +But by clicking around you're likely to discover these features for yourself.
In the next page, we'll dive into how you can create and publish JSON-AD files.
+ +Now that we're familiar with the basics of Atomic Data Core and its Schema, it's time to create some Atomic Data! +We call the process of turning data into Atomic Data Atomizing. +During this process, we upgrade the data quality. +Our information becomes more valuable. +Let's summarize what the advantages are:
+In general, there are three ways to create Atomic Data:
+Authentication means knowing who is doing something, either getting access or creating some new data. +When an Agent wants to edit a resource, they have to send a signed Commit, and the signatures are checked in order to authorize a Commit.
+But how do we deal with reading data, how do we know who is trying to get access? +There are two ways users can authenticate themselves:
+Authentication Resource and using that as a cookieAuthentication Resource.An Authentication Resource is a JSON-AD object containing all the information a Server needs to make sure a valid Agent requests a session at some point in time. +These are used both in Cookie-based auth, as well as in WebSockets
+We use the following fields (be sure to use the full URLs in the resource, see the example below):
- requestedSubject: The URL of the requested resource.
  - For WebSockets, use the wss address as the requestedSubject (e.g. wss://example.com/ws).
  - For Cookie-based authentication, use the origin of the server (e.g. https://example.com).
  - For individually signed requests, use the GET address of the resource (e.g. https://example.com/myResource).
- agent: The URL of the Agent requesting the subject and signing this Authentication Resource.
- publicKey: base64 serialized ED25519 public key of the agent.
- signature: base64 serialized ED25519 signature of the following string: {requestedSubject} {timestamp} (without the brackets), signed by the private key of the Agent.
- timestamp: Unix timestamp of when the Authentication was signed.
- validUntil (optional): Unix timestamp of when the Authentication should no longer be valid. If not provided, the server will default to 30 seconds from the timestamp.
+{
+ "https://atomicdata.dev/properties/auth/agent": "http://example.com/agents/N32zQnZHoj1LbTaWI5CkA4eT2AaJNBPhWcNriBgy6CE=",
+ "https://atomicdata.dev/properties/auth/requestedSubject": "wss://example.com/ws",
+ "https://atomicdata.dev/properties/auth/publicKey": "N32zQnZHoj1LbTaWI5CkA4eT2AaJNBPhWcNriBgy6CE=",
+ "https://atomicdata.dev/properties/auth/timestamp": 1661757470002,
+ "https://atomicdata.dev/properties/auth/signature": "19Ce38zFu0E37kXWn8xGEAaeRyeP6EK0S2bt03s36gRrWxLiBbuyxX3LU9qg68pvZTzY3/P3Pgxr6VrOEvYAAQ=="
+}
+
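Building and signing such an Authentication Resource on the client could look roughly like this (a sketch; signToBase64 is the same Ed25519 helper referenced in the request-signing example further below):

```ts
// Sketch: create and sign an Authentication Resource for a WebSocket session.
const privateKey = "someBase64Key"; // the Agent's private key
const agent = "http://example.com/agents/N32zQnZHoj1LbTaWI5CkA4eT2AaJNBPhWcNriBgy6CE=";
const publicKey = "N32zQnZHoj1LbTaWI5CkA4eT2AaJNBPhWcNriBgy6CE=";
const requestedSubject = "wss://example.com/ws";
const timestamp = Date.now();

// The signed message is "{requestedSubject} {timestamp}"
const signature = await signToBase64(`${requestedSubject} ${timestamp}`, privateKey);

const authenticationResource = {
  "https://atomicdata.dev/properties/auth/agent": agent,
  "https://atomicdata.dev/properties/auth/requestedSubject": requestedSubject,
  "https://atomicdata.dev/properties/auth/publicKey": publicKey,
  "https://atomicdata.dev/properties/auth/timestamp": timestamp,
  "https://atomicdata.dev/properties/auth/signature": signature,
};
```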
+In this approach, the client creates and signs a Resource that proves that an Agent wants to access a certain server for some amount of time. +This Authentication Resource is stored as a cookie, and passed along in every HTTP request to the server.
- Name the cookie atomic_session.
- Set the Secure attribute to prevent Man-in-the-middle attacks over HTTP.

Similar to creating the Cookie, except that we pass the base64 serialized Authentication Resource as a Bearer token in the Authorization header.
GET /myResource HTTP/1.1
+Authorization: Bearer {base64 serialized Authentication Resource}
+
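In a client, that header could be set like this (a sketch; the authenticationResource is built as shown above, with the requested resource as its requestedSubject):

```ts
// Sketch: authenticate a single GET request with a Bearer token.
const token = btoa(JSON.stringify(authenticationResource)); // base64 serialize
const response = await fetch("https://example.com/myResource", {
  headers: {
    Accept: "application/ad+json",
    Authorization: `Bearer ${token}`,
  },
});
```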
+In Data Browser, you can find the token tab in /app/token to create a token.
After opening a WebSocket connection, create an Authentication Resource.
+Send a message like so: AUTHENTICATE {authenticationResource}.
+The server will only respond if there is something wrong.
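A minimal client-side sketch of this handshake:

```ts
// Sketch: authenticate over a WebSocket connection.
const ws = new WebSocket("wss://example.com/ws");
ws.onopen = () => {
  // Prefix the JSON-AD Authentication Resource with the AUTHENTICATE keyword
  ws.send(`AUTHENTICATE ${JSON.stringify(authenticationResource)}`);
};
// The server only responds if something went wrong
ws.onmessage = (event) => console.error("Authentication problem:", event.data);
```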
Atomic Data allows signing every HTTP request. +This method is most secure, since a MITM attack would only give access to the specific resource requested, and only for a short amount of time. +Note that signing every single request takes a bit of time. +We picked a fast algorithm (Ed25519) to minimize this cost.
+All of the following headers are required, if you need authentication.
+x-atomic-public-key: The base64 public key (Ed25519) of the Agent sending the requestx-atomic-signature: A base64 signature of the following string: {subject} {timestamp}x-atomic-timestamp: The current time (when sending the request) as milliseconds since unix epochx-atomic-agent: The subject URL of the Agent sending the request.Here's an example (js) client side implementation with comments:
+// The Private Key of the agent is used for signing
// https://atomicdata.dev/properties/privateKey
const privateKey = "someBase64Key";
const timestamp = Math.round(new Date().getTime());
// This is what you will need to sign.
// The timestamp is to limit the harm of a man-in-the-middle attack.
// The `subject` is the full HTTP url that is to be fetched.
const message = `${subject} ${timestamp}`;
// Sign using Ed25519, see example implementation here: https://github.com/atomicdata-dev/atomic-data-browser/blob/30b2f8af59d25084de966301cb6bd1ed90c0eb78/lib/src/commit.ts#L176
const signed = await signToBase64(message, privateKey);
// Set all of these headers
const headers = new Headers();
headers.set('x-atomic-public-key', await agent.getPublicKey());
headers.set('x-atomic-signature', signed);
headers.set('x-atomic-timestamp', timestamp.toString());
headers.set('x-atomic-agent', agent?.subject);
const response = await fetch(subject, {headers});
- If no x-atomic HTTP headers are present, the server assigns the PublicAgent to the request. This Agent represents any guest who is not signed in.
- If only some of the required x-atomic headers are present, the server will return with a 500.
- The server checks that the signature is valid and that the validUntil has not yet passed.
- The server checks whether the signing Agent has the required rights (e.g. the read right in the resource or its parents).

Atomic Data uses Hierarchies to describe who gets to access some resource, and who can edit it.
+Let's compare the Atomic Commit approach with some existing protocols for communicating state changes / patches / mutations / deltas in linked data, JSON and text files. +First, I'll briefly discuss the existing examples (open a PR / issue if we're missing something!). +After that, we'll discuss how Atomic Data differs from the existing ones.
+This might be an odd one in this list, but it is an interesting one nonetheless.
+Git is an incredibly popular version control system that is used by most software developers to manage their code.
+It's a decentralized concept which allows multiple computers to share a log of commits, which together represent a folder with its files and its history.
+It uses hashing to represent (parts of) data (which keeps the .git folder compact through deduplication), and uses cryptographic keys to sign commits and verify authorship.
+It is designed to work in the paradigm of text files, newlines and folders.
+Since most data can be represented as text files in a folder, Git is very flexible.
+This is partly because people are familiar with Git, but also because it has a great ecosystem - platforms such as Github provide a clean UI, cloud storage, issue tracking, authorization, authentication and more for free, as long as you use Git to manage your versions.
However, Git doesn't work great for structured data - especially when it changes a lot. +Git, on its own, does not perform any validations on integrity of data. +Git also does not adhere to some standardized serialization format for storing commits, which makes sense, because it was designed as a tool to solve a problem, and not as some standard that is to be used in various other systems. +Also, git is kind of a heavyweight abstraction for many applications. +It is designed for collaborating on open source projects, which means dealing with decentralized data storage and merge conflicts - things that might not be required in other kinds of scenarios.
+Let's move on to specifications that mutate RDF specifically:
+N3 Patch is part of the Solid spec, since december 2021.
+It uses the N3 serialization format to describe changes to RDF documents.
+@prefix solid: <http://www.w3.org/ns/solid/terms#>
+
+<> solid:patches <https://tim.localhost:7777/read-write.ttl>;
+ solid:where { ?a <y> <z>. };
+ solid:inserts { ?a <y> <z>. };
+ solid:deletes { ?a <b> <c>. }.
+
+https://afs.github.io/rdf-delta/
+Describes changes (RDF Patches) in a specialized turtle-like serialization format.
+TX .
+PA "rdf" "http://www.w3.org/1999/02/22-rdf-syntax-ns#" .
+PA "owl" "http://www.w3.org/2002/07/owl#" .
+PA "rdfs" "http://www.w3.org/2000/01/rdf-schema#" .
+A <http://example/SubClass> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Class> .
+A <http://example/SubClass> <http://www.w3.org/2000/01/rdf-schema#subClassOf> <http://example/SUPER_CLASS> .
+A <http://example/SubClass> <http://www.w3.org/2000/01/rdf-schema#label> "SubClass" .
+TC .
+
Similar to Atomic Commits, these Deltas should have identifiers (URLs), which are denoted in a header.
+http://www.tara.tcd.ie/handle/2262/91407
+Spec for classifying and representing state changes between two RDF resources. +I wasn't able to find a serialization or an implementation for this.
+https://www.igi-global.com/article/patchr/135561
+An ontology for RDF change requests. +Looks very interesting, but I'm not able to find any implementations.
+prefix : <http://example.org/> .
+@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+@prefix pat: <http://purl.org/hpi/patchr#> .
+@prefix guo: <http://webr3.org/owl/guo#> .
+@prefix prov: <http://purl.org/net/provenance/ns#> .
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix dbp: <http://dbpedia.org/resource/> .
+@prefix dbo: <http://dbpedia.org/ontology/> .
+
+:Patch_15 a pat:Patch ;
+ pat:appliesTo <http://dbpedia.org/void.ttl#DBpedia_3.5> ;
+ pat:status pat:Open ;
+ pat:update [
+ a guo:UpdateInstruction ;
+ guo:target_graph <http://dbpedia.org/> ;
+ guo:target_subject dbp:Oregon ;
+ guo:delete [dbo:language dbp:De_jure ] ;
+ guo:insert [dbo:language dbp:English_language ]
+ ] ;
+ prov:wasGeneratedBy [a prov:Activity ;
+ pat:confidence "0.5"^^xsd:decimal ;
+ prov:wasAssociatedWith :WhoKnows ;
+ prov:actedOnBehalfOf :WhoKnows#Player_25 ;
+ prov:performedAt "..."^^xsd:dateTime ] .
+
+https://www.w3.org/TR/ldpatch/
This offers quite a few features besides adding and deleting triples, such as updating lists. It's a unique serialization format, inspired by turtle. Some implementations exist, such as one in Ruby.
+PATCH /timbl HTTP/1.1
+Host: example.org
+Content-Length: 478
+Content-Type: text/ldpatch
+If-Match: "abc123"
+
+@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+@prefix schema: <http://schema.org/> .
+@prefix profile: <http://ogp.me/ns/profile#> .
+@prefix ex: <http://example.org/vocab#> .
+
+Delete { <#> profile:first_name "Tim" } .
+Add {
+ <#> profile:first_name "Timothy" ;
+ profile:image <https://example.org/timbl.jpg> .
+} .
+
+Bind ?workLocation <#> / schema:workLocation .
+Cut ?workLocation .
+
+UpdateList <#> ex:preferredLanguages 1..2 ( "fr-CH" ) .
+
+Bind ?event <#> / schema:performerIn [ / schema:url = <https://www.w3.org/2012/ldp/wiki/F2F5> ] .
+Add { ?event rdf:type schema:Event } .
+
+Bind ?ted <http://conferences.ted.com/TED2009/> / ^schema:url ! .
+Delete { ?ted schema:startDate "2009-02-04" } .
+Add {
+ ?ted schema:location [
+ schema:name "Long Beach, California" ;
+ schema:geo [
+ schema:latitude "33.7817" ;
+ schema:longitude "-118.2054"
+ ]
+ ]
+} .
+
+https://github.com/ontola/linked-delta
+An N-Quads serialized delta format. +Methods are URLs, which means they are extensible. +Does not specify how to bundle lines. +Used in production of a web app that we're working on (Argu.co). +Designed with simplicity (no new serialization format, simple to parse) and performance in mind by my colleague Thom van Kalkeren.
+Initial state:
+
+<http://example.org/resource> <http://example.org/predicate> "Old value 🙈" .
+
+Linked-Delta:
+
+<http://example.org/resource> <http://example.org/predicate> "New value 🐵" <http://purl.org/linked-delta/replace> .
+
+New state:
+
+<http://example.org/resource> <http://example.org/predicate> "New value 🐵" .
+
+https://github.com/digibib/ls.ext/wiki/JSON-LD-PATCH
+A JSON denoted patch notation for RDF. +Seems similar to the RDF/JSON serialization format. +Uses string literals as operators / methods. +Conceptually perhaps most similar to linked-delta.
+Has a JS implementation.
+[
+ {
+ "op": "add",
+ "s": "http://example.org/my/resource",
+ "p": "http://example.org/ontology#title",
+ "o": {
+ "value": "New Title",
+ "type": "http://www.w3.org/2001/XMLSchema#string"
+ }
+ }
+]
+
+https://www.w3.org/TR/sparql11-update/
+SPARQL queries that change data.
+PREFIX dc: <http://purl.org/dc/elements/1.1/>
+INSERT DATA
+{
+ <http://example/book1> dc:title "A new book" ;
+ dc:creator "A.N.Other" .
+}
+
+Allows for very powerful queries, combined with updates.
+E.g. rename all persons named Bill to William:
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
+
+WITH <http://example/addresses>
+DELETE { ?person foaf:givenName 'Bill' }
+INSERT { ?person foaf:givenName 'William' }
+WHERE
+ { ?person foaf:givenName 'Bill'
+ }
+
+SPARQL Update is the most powerful of the formats, but also perhaps the most difficult to implement and understand.
+A simple way to edit JSON objects:
+The original document
+
+{
+ "baz": "qux",
+ "foo": "bar"
+}
+
+The patch
+
+[
+ { "op": "replace", "path": "/baz", "value": "boo" },
+ { "op": "add", "path": "/hello", "value": ["world"] },
+ { "op": "remove", "path": "/foo" }
+]
+
+The result
+
+{
+ "baz": "boo",
+ "hello": ["world"]
+}
+
+It uses the JSON-Pointer spec for denoting paths.
+It has quite a bunch of implementations, in various languages.
Let's talk about the differences between the concepts above and Atomic Commits.
+For starters, Atomic Commits can only work with a specific subset of RDF, namely Atomic Data. +RDF allows for blank nodes, does not have subject-predicate uniqueness and offers named graphs - which all make it hard to unambiguously select a single value. +Most of the alternative patch / delta models described above had to support these concepts. +Atomic Data is more strict and constrained than RDF. +It does not support named graphs and blank nodes. +This enables a simpler approach to describing state changes, but it also means that Atomic Commits will not work with most existing RDF data.
Secondly, individual Atomic Commits are tightly coupled to specific Resources. A single Commit cannot change multiple resources - most of the models discussed above do enable this. This is a big constraint, and it does not allow for things like compact migrations in a database. However, this resource-bound constraint opens up some interesting possibilities:
+Thirdly, Atomic Commits don't introduce a new serialization format. +It's just JSON. +This means that it will feel familiar for most developers, and will be supported by many existing environments.
+Finally, Atomic Commits use cryptography (hashing) to determine authenticity of commits. +This concept is borrowed from git commits, which also uses signatures to prove authorship. +As is the case with git, this also allows for verifiable P2P sharing of changes.
+ +url: https://atomicdata.dev/classes/Commit
+A Commit is a Resource that describes how a Resource must be updated. +It can be used for auditing, versioning and feeds. +It is cryptographically signed by an Agent.
+The required fields are:
- subject - The thing being changed. A Resource Subject URL (HTTP identifier) of the Resource that the Commit changes. A Commit Subject must not contain query parameters, as these are reserved for dynamic resources.
- signer - Who's making the change. The Atomic URL of the Author's profile - which in turn must contain a publicKey.
- signature - Cryptographic proof of the change. A hash of the JSON-AD serialized Commit (without the signature field), signed by the Agent's private-key. This proves that the author is indeed the one who created this exact commit. The signature of the Commit is also used as the identifier of the commit.
- created-at - When the change was made. A UNIX timestamp number of when the commit was created.
- destroy - If true, the existing Resource will be removed.
- remove - an array of Properties that need to be removed (including their values).
- set - a Nested Resource which contains all the new or edited fields.
- push - a Nested Resource which contains all the fields that are appended to. This means adding items to a new or existing ResourceArray.
+This means that you can set destroy to true and include set, which empties the existing resource and sets new values.
Since Commits contains cryptographic proof of authorship, they can be accepted at a public endpoint. +There is no need for authentication.
A commit should be sent (using an HTTPS POST request) to the /commit endpoint of an Atomic Server.
The server then checks the signature and the author rights, and responds with a 2xx status code if it succeeded, or a 5xx error if something went wrong.
+The error will be a JSON object.
Let's look at an example Commit:
+{
+ "@id": "https://atomicdata.dev/commits/3n+U/3OvymF86Ha6S9MQZtRVIQAAL0rv9ZQpjViht4emjnqKxj4wByiO9RhfL+qwoxTg0FMwKQsNg6d0QU7pAw==",
+ "https://atomicdata.dev/properties/createdAt": 1611489929370,
+ "https://atomicdata.dev/properties/isA": [
+ "https://atomicdata.dev/classes/Commit"
+ ],
+ "https://atomicdata.dev/properties/set": {
+ "https://atomicdata.dev/properties/shortname": "1611489928"
+ },
+ "https://atomicdata.dev/properties/signature": "3n+U/3OvymF86Ha6S9MQZtRVIQAAL0rv9ZQpjViht4emjnqKxj4wByiO9RhfL+qwoxTg0FMwKQsNg6d0QU7pAw==",
+ "https://atomicdata.dev/properties/signer": "https://surfy.ddns.net/agents/9YCs7htDdF4yBAiA4HuHgjsafg+xZIrtZNELz4msCmc=",
+ "https://atomicdata.dev/properties/previousCommit": "https://surfy.ddns.net/commits/9YCs7htDdF4yBAiA4HuHgjsafg+xZIrtZNELz4msCmc=",
+ "https://atomicdata.dev/properties/subject": "https://atomicdata.dev/test"
+}
+
+This Commit can be sent to any Atomic Server. +This server, in turn, should verify the signature and the author's rights before the server applies the Commit.
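Sending the example Commit over HTTP could look roughly like this (a sketch; the host and exact Content-Type shown are illustrative):

```
POST /commit HTTP/1.1
Host: atomicdata.dev
Content-Type: application/ad+json

{ ...the signed Commit JSON-AD from the example above... }
```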
+The signature is a base64 encoded Ed25519 signature of the deterministically serialized Commit. +Calculating the signature is a delicate process that should be followed to the letter - even a single character in the wrong place will result in an incorrect signature, which makes the Commit invalid.
+The first step is serializing the commit deterministically. +This means that the process will always end in the exact same string.
- If destroy is false, do not include it.

This will result in a string. The next step is to sign this string using the Ed25519 private key from the Author. This signature is a byte array, which should be encoded in base64 for serialization. Make sure that the Author's URL resolves to a Resource that contains the linked public key.
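As a sketch, the signing step could look like this in TypeScript, reusing the signToBase64 helper from the authentication examples (serializeDeterministically is a hypothetical helper; one possible implementation is sketched in the JSON-AD chapter):

```ts
// Sketch: sign a Commit. The signature field is added only after signing.
const commitWithoutSignature = {
  // "https://atomicdata.dev/properties/subject": ...,
  // "https://atomicdata.dev/properties/signer": ...,
  // "https://atomicdata.dev/properties/createdAt": ...,
  // "https://atomicdata.dev/properties/set": { ... },
};
const serialized = serializeDeterministically(commitWithoutSignature);
const signature = await signToBase64(serialized, privateKey);
const commit = {
  ...commitWithoutSignature,
  "https://atomicdata.dev/properties/signature": signature,
};
```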
+Congratulations, you've just created a valid Commit!
+Here are currently working implementations of this process, including serialization and signing (links are permalinks).
If you want to validate your implementation, check out the tests for these two projects.
+If you're on the receiving end of a Commit (e.g. if you're writing a server or a client who has to parse Commits), you will apply the Commit to your Store. +If you have to persist the Commit, you must perform all of the checks. +If you're writing a client, and you trust the source of the Commit, you can probably skip the validation steps.
+Here's how you apply a Commit:
- Check that the previousCommit of the Commit matches with the previousCommit of the Resource.
- Iterate over the set fields. Overwrite existing Values, or add the new Values. Make sure the Datatypes match with the respective Properties.
- Iterate over the remove fields. Remove existing properties.

Disclaimer: Work in progress, prone to change.
+Atomic Commits is a specification for communicating state changes (events / transactions / patches / deltas / mutations) of Atomic Data. +It is the part of Atomic Data that is concerned with writing, editing, removing and updating information.
+Although it's a good idea to keep data at the source as much as possible, we'll often need to synchronize two systems. +For example when data has to be queried or indexed differently than its source can support. +Doing this synchronization can be very difficult, since most of our software is designed to only maintain and share the current state of a system.
I noticed this mainly when working on OpenBesluitvorming.nl - an open data project where we aimed to fetch and standardize meeting data (votes, meeting minutes, documents) from 150+ local governments in the Netherlands. We wrote software that fetched data from various systems (which all had different models, serialization formats and APIs), transformed this data to a single standard and shared it through an API and a fulltext search endpoint. One of the hard parts was keeping our data in sync with the sources. How could we know if something was changed upstream? We queried all these systems every night for all meetings from the next and previous month, and made deep comparisons to our own data.
+This approach has a couple of issues:
+Persisting and sharing state changes could solve these issues. +In order for this to work, we need to standardize this for all data suppliers. +We need a specification that is easy to understand for most developers.
+Keeping track of where data comes from is essential to knowing whether you can trust it - whether you consider it to be true. +When you want to persist data, that quickly becomes bothersome. +Atomic Data and Atomic Commits aim to make this easier by using cryptography for ensuring data comes from some particular source, and is therefore trustworthy.
+If you want to know how Atomic Commits differ from other specs, see the compare section
+ +Atomic Data is a modular specification for sharing information on the web. +Since Atomic Data is a modular specification, you can mostly take what you want to use, and ignore the rest. +The Core part, however, is the only required part of the specification, as all others depend on it.
Atomic Data Core can be used to express any type of information, including personal data, vocabularies, metadata, documents, files and more. It's designed to be easily serializable to both JSON and linked data formats. It is a typed data model, which means that every value must be validated by its datatype.
- dot.syntax, similar to how you navigate a JSON object in javascript.

A Resource is a bunch of information about a thing, referenced by a single link (the Subject). Formally, it is a set of Atoms (i.e. a Graph) that share a Subject URL. You can think of a Resource as a single row in a spreadsheet or database. In practice, Resources can be anything - a Person, a Blogpost, a Todo item. A Resource consists of at least one Atom, so it always has some Property and some Value. A Property can only occur once in every Resource.
+Every Resource is composed of Atoms. +The Atom is the smallest possible piece of meaningful data / information (hence the name). +You can think of an Atom as a single cell in a spreadsheet or database. +An Atom consists of three fields:
+If you're familiar with RDF, you'll notice similarities. +An Atom is comparable with an RDF Triple / Statement (although there are important differences).
+Let's turn this sentence into Atoms:
Arnold Peters, who's born on the 20th of January 1991, has a best friend named Britta Smalls.
| Subject | Property | Value |
|---|---|---|
| Arnold | last name | Peters |
| Arnold | birthdate | 1991-01-20 |
| Arnold | best friend | Britta |
| Britta | last name | Smalls |
The table above shows human readable strings, but in Atomic Data, we use links (URLs) wherever we can. +That's because links are awesome. +Links remove ambiguity (we know exactly which person or property we mean), they are resolvable (we can click on them), and they are machine readable (machines can fetch links to do useful things with them). +So the table from above, will more closely resemble this one:
+| Subject | Property | Value |
|---|---|---|
| https://example.com/arnold | https://example.com/properties/lastname | Peters |
| https://example.com/arnold | https://example.com/properties/birthDate | 1991-01-20 |
| https://example.com/arnold | https://example.com/properties/bestFriend | https://example.com/britta |
| https://example.com/britta | https://example.com/properties/lastname | Smalls |
The standard serialization format for Atomic Data is JSON-AD, which looks like this:
+[{
+ "@id": "https://example.com/arnold",
+ "https://example.com/properties/lastname": "Peters",
+ "https://example.com/properties/birthDate": "1991-01-20",
+ "https://example.com/properties/bestFriend": "https://example.com/britta",
+},{
+ "@id": "https://example.com/britta",
+ "https://example.com/properties/lastname": "Smalls",
+}]
+
+The @id field denotes the Subject of each Resource, which is also the URL that should point to where the resource can be found.
In the JSON-AD example above, we have:
- two Resources: https://example.com/arnold and https://example.com/britta
- three distinct Properties (https://example.com/properties/lastname, https://example.com/properties/birthDate, and https://example.com/properties/bestFriend)
- four distinct Values (Peters, 1991-01-20, https://example.com/britta and Smalls)
+One of the Values is a URL, too, but we also have values like Arnold and 1991-01-20.
+Values can have different Datatypes
+In most other data formats, the datatypes are limited and are visually distinct.
+JSON, for example, has array, object, string, number and boolean.
+In Atomic Data, however, datatypes are defined somewhere else, and are extendible.
+To find the Datatype of an Atom, you fetch the Property, and that Property will have a Datatype.
+For example, the https://example.com/properties/bornAt Property requires an ISO Date string, and the https://example.com/properties/firstName Property requires a regular string.
This might seem a little tedious and weird at first, but it has some nice advantages!
+Their Datatypes are defined in the Properties.
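For example, a client can look up the Datatype of an unknown key at runtime by fetching the Property resource (a sketch; the example.com URLs above are illustrative and won't actually resolve):

```ts
// Sketch: discover the Datatype of an Atom by fetching its Property.
const propertyUrl = "https://example.com/properties/birthDate";
const resp = await fetch(propertyUrl, {
  headers: { Accept: "application/ad+json" },
});
const property = await resp.json();
// Every Property resource links to a Datatype:
const datatype = property["https://atomicdata.dev/properties/datatype"];
console.log(datatype); // e.g. a date Datatype URL
```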
The Subject field is the first part of an Atom.
+It is the identifier that the rest of the Atom is providing information about.
+The Subject field is a URL that points to the Resource.
+The creator of the Subject MUST make sure that it resolves.
In other words: following / downloading the Subject link will provide you with all the Atoms about the Subject (see Querying Atomic Data).
+This also means that the creator of a Resource must make sure that it is available at its URL - probably by hosting the data, or by using some service that hosts it.
+In JSON-AD, the Subject is denoted by @id.
The Property field is the second part of an Atom. +It is a URL that points to an Atomic Property. +Examples can be found at https://atomicdata.dev/properties.
+ +The Property field MUST be a URL, and that URL MUST resolve (it must be publicly available) to an Atomic Property.
+The Property is perhaps the most important concept in Atomic Data, as it is what enables the type safety (thanks to datatype) and the JSON compatibility (thanks to shortname).
+We also use Properties for rendering fields in a form, because the Datatype, shortname and description helps us to create an intuitive, easy to understand input for users.
The Value field is the third part of an Atom.
+In RDF, this is called an object.
+Contrary to the Subject and Property values, the Value can be of any datatype.
+This includes URLs, strings, integers, dates and more.
A Graph is a collection of Atoms. +A Graph can describe various subjects, which may or may not be related. +Graphs can have several characteristics (Schema Complete, Valid, Closed)
In mathematical graph terminology, a graph consists of nodes and edges.
The Atomic Data model is a so-called directed graph, which means that relationships are by default one-way.
+In Atomic Data, every node is a Resource, and every edge is a Property.
A Nested Resource only exists inside of another resource. +It does not have its own subject.
+In the next chapter, we'll explore how Atomic Data is serialized.
+ +Although you can use various serialization formats for Atomic Data, JSON-AD is the default and only required serialization format.
+It is what the current Rust and Typescript / React implementations use to communicate.
+It is designed to feel familiar to developers and to be easy and performant to parse and serialize.
+It is inspired by JSON-LD.
It uses JSON, but has some additional constraints:
+Resource.Property URL. Other keys are invalid. Each Property URL must resolve to an online Atomic Data Property.@id field is special: it defines the Subject of the Resource. If you send an HTTP GET request there with an content-type: application/ad+json header, you should get the full JSON-AD resource.@id subject) or a Named Nested Resource (with an @id subject). Everywhere a Subject URL can be used as a value (i.e. all properties with the datatype atomicURL), a Nested Resource can be used instead. This also means that an item in an ResourceArray can be a Nested Resource.@id), or an Array containing Named Resources. When you want to describe multiple Resources in one JSON-AD document, use an array as the root item.Let's look at an example JSON-AD Resource:
+{
+ "@id": "https://atomicdata.dev/properties/description",
+ "https://atomicdata.dev/properties/datatype": "https://atomicdata.dev/datatypes/markdown",
+ "https://atomicdata.dev/properties/description": "A textual description of something. When making a description, make sure that the first few words tell the most important part. Give examples. Since the text supports markdown, you're free to use links and more.",
+ "https://atomicdata.dev/properties/isA": [
+ "https://atomicdata.dev/classes/Property"
+ ],
+ "https://atomicdata.dev/properties/shortname": "description"
+}
+
+The mime type (for HTTP content negotiation) is application/ad+json (registration ongoing).
In JSON-AD, a Resource can be represented in multiple ways:

- as a Subject URL string, e.g. https://atomicdata.dev/classes/Class.
- as a full Resource: a JSON object with an @id field containing the Subject.
- as a JSON object without an @id field. This is only possible if it is a Nested Resource, which means that it has a parent Resource.
In the following JSON-AD example, the address is a nested resource:
{
+ "@id": "https://example.com/arnold",
+ "https://example.com/properties/address": {
+ "https://example.com/properties/firstLine": "Longstreet 22",
+ "https://example.com/properties/city": "Watertown",
+ "https://example.com/properties/country": "the Netherlands",
+ }
+}
+
Nested Resources can be named or anonymous. An Anonymous Nested Resource does not have its own @id field.
+It does have its own unique path, which can be used as its identifier.
+The path of the anonymous resource in the example above is https://example.com/arnold https://example.com/properties/address.
When you need deterministic serialization of Atomic Data (e.g. when calculating a cryptographic hash or signature, used in Atomic Commits), you can use the following procedure:
The last two steps of this process are more formally defined by the JSON Canonicalization Scheme (JCS, rfc8785).
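A sketch of such a deterministic serialization in TypeScript (recursively sorting keys and emitting minified JSON; follow JCS/rfc8785 for full compliance):

```ts
// Sketch: deterministic JSON-AD serialization by recursively sorting object keys.
function sortKeysDeep(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(sortKeysDeep);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .sort(([a], [b]) => (a < b ? -1 : a > b ? 1 : 0))
        .map(([key, val]) => [key, sortKeysDeep(val)])
    );
  }
  return value;
}

function serializeDeterministically(resource: object): string {
  // JSON.stringify without an indent argument emits no extra whitespace.
  return JSON.stringify(sortKeysDeep(resource));
}
```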
+An Atomic Path is a string that consists of at least one URL, followed by one or more URLs or Shortnames. +Every single value in an Atomic Resource can be targeted through such a Path. +They can be used as identifiers for specific Values.
+The simplest path, is the URL of a resource, which represents the entire Resource with all its properties. +If you want to target a specific atom, you can use an Atomic Path with a second URL. +This second URL can be replaced by a Shortname, if the Resource is an instance of a class which has properties with that Shortname (sounds more complicated than it is).
+Let's start with this simple Resource:
+{
+ "@id": "https://example.com/john",
+ "https://example.com/lastName": "McLovin",
+}
+
+Then the following Path targets the McLovin value:
https://example.com/john https://example.com/lastName => McLovin
Instead of using the full URL of the lastName Property, we can use its shortname:
https://example.com/john lastname => McLovin
We can also traverse relationships between resources:
+[{
+ "@id": "https://example.com/john",
+ "https://example.com/lastName": "McLovin",
+ "https://example.com/employer": "https://example.com/XCorp",
+},{
+ "@id": "https://example.com/XCorp",
+ "https://example.com/description": "The greatest company!",
+}]
+
+https://example.com/john employer description => The greatest company!
In the example above, the XCorp subject exists and is the source of the The greatest company! value.
+We can use this path as a unique identifier for the description of John's current employer.
+Note that the data for the description of that employer does not have to be in John's control for this path to work - it can live on a totally different server.
+However, in Atomic Data it's also possible to include this description in the resource of John as a Nested Resource.
All Atomic Data Resources that we've discussed so far have an explicit URL as a subject. +Unfortunately, creating unique and resolvable URLs can be a bother, and sometimes not necessary. +If you've worked with RDF, this is what Blank Nodes are used for. +In Atomic Data, we have something similar: Nested Resources.
+Let's use a Nested Resource in the example from the previous section:
+{
+ "@id": "https://example.com/john",
+ "https://example.com/lastName": "McLovin",
+ "https://example.com/employer": {
+ "https://example.com/description": "The greatest company!",
+ }
+}
+
+Now the employer is simply a nested Object.
+Note that it no longer has its own @id.
+However, we can still identify this Nested Resource using its Path.
The Subject of the nested resource is its path: https://example.com/john https://example.com/employer, including the space between the two URLs.
Note that the path from before still resolves:
+https://example.com/john employer description => The greatest company!
We can also navigate Arrays using paths.
+For example:
+{
+ "@id": "https://example.com/john",
+ "hasShoes": [
+ {
+ "https://example.com/name": "Mr. Boot",
+ },
+ {
+ "https://example.com/name": "Sunny Sandals",
+ }
+ ]
+}
+
+The Path of Mr. Boot is:
https://example.com/john hasShoes 0 name
+
+You can target an item in an array by using a number to indicate its position, starting with 0.
+Notice how the Resource with the name: Mr. Boot does not have an explicit @id, but it does have a Path.
+This means that we still have a unique, globally resolvable identifier - yay!
Install the atomic-cli software and run atomic-cli get https://atomicdata.dev/classes/Class description.
There are multiple ways of getting Atomic Data into some system:
+The simplest way of getting Atomic Data when the Subject is an HTTP URL, is by sending a GET request to the subject URL.
+Set the Content-Type header to an Atomic Data compatible mime type, such as application/ad+json.
GET https://atomicdata.dev/test HTTP/1.1
+Content-Type: application/ad+json
+
+The server SHOULD respond with all the Atoms of which the requested URL is the subject:
+HTTP/1.1 200 OK
+Content-Type: application/ad+json
+Connection: Closed
+
+{
+ "@id": "https://atomicdata.dev/test",
+ "https://atomicdata.dev/properties/shortname": "1611489928"
+}
+
+The server MAY also include other resources, if they are deemed relevant.
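The same request with curl could look like this, using the Accept header for content negotiation (as in the RDF example further below):

```
curl -i -H "Accept: application/ad+json" "https://atomicdata.dev/test"
```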
+Collections are Resources that provide simple query options, such as filtering by Property or Value, and sorting. +They also paginate resources. +Under the hood, Collections are powered by Triple Pattern Fragments. +Use query parameters to traverse pages, filter, or sort.
An Atomic Path is a string that consists of one or more URLs, which when traversed point to an item.
SPARQL is a powerful RDF query language. Since all Atomic Data is also valid RDF, it should be possible to query Atomic Data using SPARQL. None of the existing implementations support a SPARQL endpoint, though.
+accept header: curl -i -H "Accept: text/turtle" "https://atomicdata.dev")Atomic Data is not necessarily bound to a single serialization format.
+It's fundamentally a data model, and that's an important distinction to make.
+It can be serialized in different ways, but there is only one required: JSON-AD.
JSON-AD (more about that on the next page) is specifically designed to be a simple, complete and performant format for Atomic Data.
{
+ "@id": "https://atomicdata.dev/properties/description",
+ "https://atomicdata.dev/properties/datatype": "https://atomicdata.dev/datatypes/markdown",
+ "https://atomicdata.dev/properties/description": "A textual description of something. When making a description, make sure that the first few words tell the most important part. Give examples. Since the text supports markdown, you're free to use links and more.",
+ "https://atomicdata.dev/properties/isA": [
+ "https://atomicdata.dev/classes/Property"
+ ],
+ "https://atomicdata.dev/properties/parent": "https://atomicdata.dev/properties",
+ "https://atomicdata.dev/properties/shortname": "description"
+}
+
+
+Atomic Data is designed to be serializable to clean, simple JSON, for usage in (client) apps that don't need to know the full URLs of properties.
+{
+ "@id": "https://atomicdata.dev/properties/description",
+ "datatype": "https://atomicdata.dev/datatypes/markdown",
+ "description": "A textual description of something. When making a description, make sure that the first few words tell the most important part. Give examples. Since the text supports markdown, you're free to use links and more.",
+ "is-a": [
+ "https://atomicdata.dev/classes/Property"
+ ],
+ "parent": "https://atomicdata.dev/properties",
+ "shortname": "description"
+}
+
+Read more about JSON and Atomic Data
+Since Atomic Data is a strict subset of RDF, RDF serialization formats can be used to communicate and store Atomic Data, such as N-Triples, Turtle, HexTuples, JSON-LD and other RDF serialization formats. +However, not all valid RDF is valid Atomic Data. +Atomic Data is more strict. +Read more about serializing Atomic Data to RDF in the RDF interoperability section.
+JSON-LD:
+{
+ "@context": {
+ "datatype": {
+ "@id": "https://atomicdata.dev/properties/datatype",
+ "@type": "@id"
+ },
+ "description": "https://atomicdata.dev/properties/description",
+ "is-a": {
+ "@container": "@list",
+ "@id": "https://atomicdata.dev/properties/isA"
+ },
+ "parent": {
+ "@id": "https://atomicdata.dev/properties/parent",
+ "@type": "@id"
+ },
+ "shortname": "https://atomicdata.dev/properties/shortname"
+ },
+ "@id": "https://atomicdata.dev/properties/description",
+ "datatype": "https://atomicdata.dev/datatypes/markdown",
+ "description": "A textual description of something. When making a description, make sure that the first few words tell the most important part. Give examples. Since the text supports markdown, you're free to use links and more.",
+ "is-a": [
+ "https://atomicdata.dev/classes/Property"
+ ],
+ "parent": "https://atomicdata.dev/properties",
+ "shortname": "description"
+}
+
+Turtle / N-Triples:
+<https://atomicdata.dev/properties/description> <https://atomicdata.dev/properties/datatype> <https://atomicdata.dev/datatypes/markdown> .
+<https://atomicdata.dev/properties/description> <https://atomicdata.dev/properties/parent> <https://atomicdata.dev/properties> .
+<https://atomicdata.dev/properties/description> <https://atomicdata.dev/properties/shortname> "description"^^<https://atomicdata.dev/datatypes/slug> .
+<https://atomicdata.dev/properties/description> <https://atomicdata.dev/properties/isA> "https://atomicdata.dev/classes/Property"^^<https://atomicdata.dev/datatypes/resourceArray> .
+<https://atomicdata.dev/properties/description> <https://atomicdata.dev/properties/description> "A textual description of something. When making a description, make sure that the first few words tell the most important part. Give examples. Since the text supports markdown, you're free to use links and more."^^<https://atomicdata.dev/datatypes/markdown> .
+
+
+ JSON-AD is the default serialization format of Atomic Data. +It's just JSON, but with some extra requirements.
+Most notably, all keys are links to Atomic Properties. +These Properties must be actually hosted somewhere on the web, so other people can visit them to read more about them.
+Ideally, in JSON-AD, each Resource has its own @id.
+This is the URL of the resource.
+This means that if someone visits that @id, they should get the resource they are requesting.
+That's great for people re-using your data, but as a data provider, implementing this can be a bit of a hassle.
+That's why there is a different way that allows you to create Atomic Data without manually hosting every resource.
In this section, we'll create a single JSON-AD file containing various resources. +This file can then be published, shared and stored like any other.
+The goal of this preparation, is to ultimately import it somewhere else. +We'll be importing it to Atomic-Server. +Atomic-Server will create URLs for every single resource upon importing it. +This way, we only deal with the JSON-AD and the data structure, and we let Atomic-Server take care of hosting the data.
+Let's create a BlogPost.
+We know the fields that we need: a name and some body.
+But we can't use these keys in Atomic Data, we should use URLs that point to Properties.
+We can either create new Properties (see the Atomic-Server tutorial), or we can use existing ones, for example by searching on AtomicData.dev/properties.
{
+ "https://atomicdata.dev/properties/name": "Writing my first blogpost",
+ "https://atomicdata.dev/properties/description": "Hi! I'm a blogpost. I'm also machine readable!",
+}
+
Classes help others understand what a Resource's type is, such as BlogPost or Person. In Atomic Data, Resources can have multiple classes, so we should use an Array, like so:
+{
+ "https://atomicdata.dev/properties/name": "Writing my first blogpost",
+ "https://atomicdata.dev/properties/description": "Hi! I'm a blogpost. I'm also machine readable!",
+ "https://atomicdata.dev/properties/isA": ["https://atomicdata.dev/classes/Article"],
+}
+
+Adding a Class helps people to understand the data, and it can provide guarantees to the data users about the shape of the data: they now know which fields are required or recommended. +We can also use Classes to render Forms, which can be useful when the data should be edited later. +For example, the BlogPost item
Ontologies are groups of concepts that describe some domain. For example, we could have an Ontology for Blogs that links to a bunch of related Classes, such as BlogPost and Person. Or we could have a Recipe Ontology that describes Ingredients, Steps and more.
+At this moment, there are relatively few Classes created in Atomic Data. +You can find most on atomicdata.dev/classes.
+So possibly the best way forward for you, is to define a Class using the Atomic Data Browser's tools for making resources.
+If we want to have multiple items, we can simply use a JSON Array at the root, like so:
+[{
+ "https://atomicdata.dev/properties/name": "Writing my first blogpost",
+ "https://atomicdata.dev/properties/description": "Hi! I'm a blogpost. I'm also machine readable!",
+ "https://atomicdata.dev/properties/isA": ["https://atomicdata.dev/classes/Article"],
+},{
+ "https://atomicdata.dev/properties/name": "Another blogpost",
+ "https://atomicdata.dev/properties/description": "I'm writing so much my hands hurt.",
+ "https://atomicdata.dev/properties/isA": ["https://atomicdata.dev/classes/Article"],
+}]
+
+localIdWhen we want to publish Atomic Data, we also want someone else to be able to import it. +An important thing to prevent, is data duplication. +If you're importing a list of Blog posts, for example, you'd want to only import every article once.
+The way to preventing duplication, is by adding a localId.
+This localId is used by the importer to find out if it has already imported it before.
+So we, as data producers, need to make sure that our localId is unique and does not change!
+We can use any type of string that we like, as long as it conforms to these requirements.
+Let's use a unique slug, a short name that is often used in URLs.
{
+ "https://atomicdata.dev/properties/name": "Writing my first blogpost",
+ "https://atomicdata.dev/properties/description": "Hi! I'm a blogpost. I'm also machine readable!",
+ "https://atomicdata.dev/properties/isA": ["https://atomicdata.dev/classes/Article"],
+ "https://atomicdata.dev/properties/localId": "my-first-blogpost",
+}
+
+localIdLet's say we also want to describe the author of the BlogPost, and give them an e-mail, a profile picture and some biography.
+This means we need to create a new Resource for each Author, and again have to think about the properties relevant for Author.
+We'll also need to create a link from BlogPost to Author, and perhaps the other way around, too.
Normally, when we link things in Atomic Data, we can only use full URLs.
+But, since we don't have URLs yet for our Resources, we'll need a different solution.
+Again, this is where we can use localId!
+We can simply refer to the localId, instead of some URL that does not exist yet.
[{
+ "https://atomicdata.dev/properties/name": "Writing my first blogpost",
+ "https://atomicdata.dev/properties/description": "Hi! I'm a blogpost. I'm also machine readable!",
+ "https://atomicdata.dev/properties/author": "jon",
+ "https://atomicdata.dev/properties/isA": ["https://atomicdata.dev/classes/Article"],
+ "https://atomicdata.dev/properties/localId": "my-first-blogpost",
+},{
+ "https://atomicdata.dev/properties/name": "Another blogpost",
+ "https://atomicdata.dev/properties/description": "I'm writing so much my hands hurt.",
+ "https://atomicdata.dev/properties/author": "jon",
+ "https://atomicdata.dev/properties/isA": ["https://atomicdata.dev/classes/Article"],
+ "https://atomicdata.dev/properties/localId": "another-blogpost",
+},{
+ "https://atomicdata.dev/properties/name": "Jon Author",
+ "https://atomicdata.dev/properties/isA": ["https://atomicdata.dev/classes/Person"],
+ "https://atomicdata.dev/properties/localId": "jon",
+}]
+
+currently under development
+ +URL: https://atomicdata.dev/classes/Endpoint
An Endpoint is a resource that accepts parameters in order to generate a response. You can think of it like a function in a programming language, or an API endpoint in an OpenAPI spec. It can be used to perform calculations on the server side, such as filtering data, sorting data, or selecting a page in a collection. Because Endpoints are resources, they can be defined and read programmatically. This means that it's possible to render Endpoints as forms.
+The most important property in an Endpoint is parameters, which is the list of Properties that can be filled in.
You can find a list of Endpoints supported by Atomic-Server on atomicdata.dev/endpoints.
+Endpoint Resources are dynamic, because their properties could be calculated server-side.
+When a Property tends to be calculated server-side, they will have a isDynamic property set to true, which tells the client that it's probably useless to try to overwrite it.
A Server can also send one or more partial Resources for an Endpoint to the client, which means that some properties may be missing.
+When this is the case, the Resource will have an incomplete property set to true.
+This tells the client that it has to individually fetch the resource from the server to get the full body.
One scenario where this happens, is when fetching Collections that have other Collections as members. +If we would not have incomplete resources, the server would have to perform expensive computations even if the data is not needed by the client.
+Atomic Data is a modular specification, which means that you can choose to implement parts of it. +All parts of Extended are optional to implement. +The Core of the specification (described in the previous chapter) is required for all of the Extended spec to work, but not the other way around.
However, many of the parts of Extended do depend on each other.
+The Atomic Data model (Atomic Schema) is great for describing structured data, but for many types of existing data, we already have a different way to represent them: files. +In Atomic Data, files have two URLs. +One describes the file and its metadata, and the other is a URL that downloads the file. +This allows us to present a better view when a user wants to take a look at some file, and learn about its context before downloading it.
+url: https://atomicdata.dev/classes/File
+Files always have a downloadURL.
+They often also have a filename, a filesize, a checksum, a mimetype, and an internal ID (more on that later).
+They also often have a parent, which can be used to set permissions / rights.
In atomic-server, a /upload endpoint exists for uploading a file.
1. Pick a parent. Make sure you have write rights on this parent.
2. Add the parent's URL as a query parameter to the /upload endpoint, e.g. /upload?parent=https%3A%2F%2Fatomicdata.dev%2Ffiles.
3. Send an HTTP POST request to the server's /upload endpoint containing multi-part-form-data. You can upload multiple files in one request. Add authentication headers, and sign the HTTP request with the Agent's private key (see Authentication).

Simply send an HTTP GET request to the File's download-url (make sure to authenticate this request).
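A client-side sketch of uploading and then downloading a file with fetch (assumptions: the multipart field name, the JSON response shape, and that the same x-atomic signing scheme from the Authentication chapter applies to the /upload URL; publicKey, privateKey, agentSubject, fileBlob and downloadUrl are assumed to be in scope):

```ts
// Sketch: upload a file to atomic-server's /upload endpoint, then download it.
const parent = encodeURIComponent("https://atomicdata.dev/files");
const uploadUrl = `https://atomicdata.dev/upload?parent=${parent}`;

const form = new FormData();
form.append("file", fileBlob, "photo.jpg"); // field name "file" is an assumption

const timestamp = Date.now();
// Sign "{url} {timestamp}" with the Agent's private key, as for any signed request
const signature = await signToBase64(`${uploadUrl} ${timestamp}`, privateKey);
const headers = {
  "x-atomic-public-key": publicKey,
  "x-atomic-signature": signature,
  "x-atomic-timestamp": timestamp.toString(),
  "x-atomic-agent": agentSubject,
};

// fetch sets the multipart/form-data boundary automatically
const uploaded = await fetch(uploadUrl, { method: "POST", body: form, headers });
const fileResources = await uploaded.json(); // assuming the created File resource(s) are returned

// Downloading: GET the File's download URL (authenticate the same way if needed)
const download = await fetch(downloadUrl, { headers });
```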
Atomic Data is an open specification, and that means that you're very welcome to share your thoughts and help make this standard as good as possible.
+Things you can do:
+Hierarchies help make information easier to find and understand. +For example, most websites use breadcrumbs to show you where you are. +Your computer probably has a bunch of drives and deeply nested folders that contain files. +We generally use these hierarchical elements to keep data organized, and to keep a tighter grip on rights management. +For example, sharing a specific folder with a team, but a different folder could be private.
+Although you are free to use Atomic Data with your own custom authorization system, we have a standardized model that is currently being used by Atomic-Server.
- Every Resource SHOULD have a parent. There are some exceptions to this, which are discussed below.
- Any Resource can be the parent of some other Resource, as long as both Resources exist on the same Atomic Server.
- Rights set on a parent also apply to all children, and their children.
- Rights are set using two kinds of Atoms on parents: read and write Atoms. These both contain a list of Agents. These Agents will be granted the rights to edit (using Commits) or read / use the Resources.
- If a Resource does not have a write Atom containing your Agent, but its parent does have one, you will still get the write right.
- Commits can not be edited. They can be read if the Agent has rights to read the subject of the Commit.
Some resources are special, as they do not require a parent:

- Drives are top-level items in the hierarchy: they do not have a parent.
- Agents are top-level items because they are not owned by anything. They can always read and write themselves.
- Commits are immutable, so they should never be edited by anyone. That's why they don't have a place in the hierarchy. Their read rights are determined by their subject.
+The specification is growing (and please contribute in the docs repo), but the current specification lacks some features:
+Atomic Data is a modular specification for sharing, modifying and modeling graph data. It combines the ease of use of JSON, the connectivity of RDF (linked data) and the reliability of type-safety.
+Atomic Data uses links to connect pieces of data, and therefore makes it easier to connect datasets to each other - even when these datasets exist on separate machines.
+Atomic Data has been designed with the following goals in mind:
+Atomic Data is Linked Data, as it is a strict subset of RDF.
+It is type-safe (you know if something is a string, number, date, URL, etc.) and extensible through Atomic Schema, which means that you can re-use or define your own Classes, Properties and Datatypes.
The default serialization format for Atomic Data is JSON-AD, which is simply JSON where each key is a URL of an Atomic Property.
+These Properties are responsible for setting the datatype (to ensure type-safety) and setting shortnames (which help to keep names short, for example in JSON serialization) and descriptions (which provide semantic explanations of what a property should be used for).
Read more about Atomic Data Core
+Atomic Data Extended is a set of extra modules (on top of Atomic Data Core) that deal with data that changes over time, authentication, and authorization.
+Atomic Data has been designed to be very easy to create and host. +In the Atomizing section, we'll show you how you can create Atomic Data in three ways:
+docker run -p 80:80 -v atomic-storage:/atomic-storage joepmeneer/atomic-server)cargo install atomic-cli)Make sure to join our Discord if you'd like to discuss Atomic Data with others.
+Keep in mind that none of the Atomic Data projects has reached a v1, which means that breaking changes can happen.
+This is written mostly as a book, so reading it in the order of the Table of Contents will probably give you the best experience. +That being said, feel free to jump around - links are often used to refer to earlier discussed concepts. +If you encounter any issues while reading, please leave an issue on Github. +Use the arrows on the side / bottom to go to the next page.
+