diff --git a/.github/workflows/deploy.yml b/.github/workflows/deploy.yml deleted file mode 100644 index 63116eb..0000000 --- a/.github/workflows/deploy.yml +++ /dev/null @@ -1,31 +0,0 @@ -name: ci -on: - push: - branches: - - master - - main -permissions: - contents: write -jobs: - deploy: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v4 - - name: Configure Git Credentials - run: | - git config user.name github-actions[bot] - git config user.email 41898282+github-actions[bot]@users.noreply.github.com - - uses: actions/setup-python@v5 - with: - python-version: 3.x - - run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV - - uses: actions/cache@v4 - with: - key: mkdocs-material-${{ env.cache_id }} - path: .cache - restore-keys: | - mkdocs-material- - - run: pip install mkdocs-material mkdocs-glightbox - working-directory: phospho-mkdocs - - run: mkdocs gh-deploy --force - working-directory: phospho-mkdocs \ No newline at end of file diff --git a/.gitignore b/.gitignore deleted file mode 100644 index fe239fa..0000000 --- a/.gitignore +++ /dev/null @@ -1,4 +0,0 @@ -.DS_Store -node_modules/ -jupyter-test-PA.ipynb -.venv \ No newline at end of file diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 0000000..e69de29 diff --git a/.vscode/settings.json b/.vscode/settings.json deleted file mode 100644 index 7c2feb7..0000000 --- a/.vscode/settings.json +++ /dev/null @@ -1,3 +0,0 @@ -{ - "editor.formatOnSave": false -} diff --git a/404.html b/404.html new file mode 100644 index 0000000..7249874 --- /dev/null +++ b/404.html @@ -0,0 +1,2267 @@ + + + + + + + + + + + + + + + + + + + phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ +

404 - Not found

+ +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/README.md b/README.md deleted file mode 100644 index 3235466..0000000 --- a/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# ๐Ÿงช phospho documentation - -This repo contains user-facing documentation [for phospho products](https://phospho.ai) diff --git a/analytics/ab-test/index.html b/analytics/ab-test/index.html new file mode 100644 index 0000000..ef12f7d --- /dev/null +++ b/analytics/ab-test/index.html @@ -0,0 +1,2531 @@ + + + + + + + + + + + + + + + + + + + + + + + + AB Testing - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + +

AB Testing

+ +

AB testing lets you compare different versions of your app to see which one performs better.

+

AB tests

+

What is AB testing

+

AB testing is a method used to compare two versions of a product to determine which performs better.

+

Comparing versions on a single criterion is hard, especially for LLM apps: the performance of a product can be measured in many different ways.

+

In phospho, AB testing is done by comparing the analytics distributions of two versions: the candidate one and the control one.

+

Prerequisites to run an AB test

+

You need to have set up event detection in your project. This will run analytics to measure the performance of your app:

+
    +
  • Tags: e.g. the topic of the conversation
  • +
  • Scores: e.g. the sentiment of the conversation (between 1 and 5)
  • +
  • Classifiers: e.g. the user intent ("buy", "ask for help", "complain")
  • +
+

Run an AB test from the platform

+
    +
  1. +

    Click on the button "Create an AB test" on the phospho platform. If you want, customize the version_id, which is the name of the test.

    +
  2. +
  3. +

    Send data to the platform using an SDK, an integration, a file, or more. All new incoming messages will be tagged with the version_id.

    +
  4. +
+

Alternative: Specify the version_id in your code

+

Alternatively, you can specify the version_id in your code. This will override the version_id set in the platform.

+

When logging to phospho, add a field version_id with the name of your version in metadata. See the example below:

+
+
+
+
log = phospho.log(
+    input="log this",
+    output="and that",
+    version_id="YOUR_VERSION_ID"
+)
+
+
+
+
log = phospho.log({
+input: "log this",
+output: "and that",
+version_id:"YOUR_VERSION_ID",
+});
+
+
+
+
curl -X POST https://api.phospho.ai/v2/log/$PHOSPHO_PROJECT_ID \
+-H "Authorization: Bearer $PHOSPHO_API_KEY" \
+-H "Content-Type: application/json" \
+-d '{
+    "batched_log_events": [
+        {
+            "input": "your_input",
+            "output": "your_output"
+            "metadata": {
+                "version_id": "YOUR_VERSION_ID"
+            }
+        }
+    ]
+}'
+
+
+
+
+

Run offline tests

+

If you want to run offline tests, you can use the phospho command line interface. Results of the offline tests are also available in the AB test tab.

+
+
    +
  • +

    phospho CLI

    +
    +

    Learn more about the phospho command line interface

    +

    Read more

    +
  • +
+
+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/analytics/clustering/index.html b/analytics/clustering/index.html new file mode 100644 index 0000000..711525f --- /dev/null +++ b/analytics/clustering/index.html @@ -0,0 +1,2521 @@ + + + + + + + + + + + + + + + + + + + + + + + + Clustering - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + +

Clustering

+ +

Clustering lets you group user messages based on their intent. It is a great way to get a sense of "what are my users talking about?" and to identify the most common topics.

+

Clustering

+

How it works

+

phospho clustering uses a combination of user intent embeddings and unsupervised clustering algorithms to group messages together.

+

The user intent embedding is a representation of the user's intention in a high-dimensional space. This representation is generated by a deep learning model trained on a large dataset of user messages. Learn more here.

+

We are constantly evaluating and improving the clustering algorithms to provide the best results.

+

How to run a clustering

+

To use the clustering feature, you need to have a phospho account and an API key. You can get one by signing up on phospho.ai.

+
    +
  1. +

    Import data. If not already done, import your data and setup a payment method.

    +
  2. +
  3. +

    Configure clustering. Go to the Clusters tab and click on the Configure clustering detection button. Select the scope of data to cluster: either messages or sessions. Filter the data by setting a date range, a specific tag, and more.

    +
  4. +
  5. +

    Run clustering. Click on the Run cluster analysis button to start the clustering process. Depending on the number of messages, it can take a few minutes.

    +
  6. +
+

+

+

How to interpret the results

+

The clustering results are presented in two formats:

+
    +
  • +

    3D Dot Cloud Graph: Each point in the graph corresponds to an embedding of a message (or a session). Clusters are distinct groups of these points.

    +
  • +
  • +

    Cluster Cards: Each cluster is also displayed as a card. The card shows the cluster size and an automatic summary of a random sample of messages. Click on "Explore" in any card to view the messages in the cluster.

    +
  • +
+

How to run a clustering with a custom instruction?

+

By default, the clustering is run based on: user intent

+

You can however modify this instruction in Advanced settings.

+

Change the clustering instruction to refine how messages are grouped, to provide insights that are more aligned with your needs. You just need to enter the topic you want to cluster on.

+

Examples of what you can enter:
- For a medical chatbot: type of disease
- For a customer support chatbot: type of issue (refund, delivery, etc.)
- For a chatbot in the e-commerce industry: product mentioned

+

How to run a custom clustering algorithm?

+

You can use the user intent embeddings to run your own clustering algorithms. The embeddings are available through the API. Learn more here.

+
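If you go down this route, here is a minimal sketch of what it could look like, assuming you have already retrieved the user intent embeddings through the API (see above). The dummy data and the choice of KMeans are illustrative only, not part of the phospho API.

import numpy as np
from sklearn.cluster import KMeans

# Replace this dummy array with the user intent embeddings fetched from the phospho API:
# one vector per message.
embeddings = np.random.rand(100, 256)  # 100 messages, 256-dimensional embeddings

kmeans = KMeans(n_clusters=10, random_state=0, n_init=10)
labels = kmeans.fit_predict(embeddings)

# labels[i] is the cluster assigned to message i; group messages by label to inspect them
print(labels[:10])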

Next steps

+

Based on the clusters, define more analytics to run on your data in order to never miss a beat on what your users are talking about. Check the event detection page for more information.

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/analytics/evaluation/index.html b/analytics/evaluation/index.html new file mode 100644 index 0000000..77dbfc5 --- /dev/null +++ b/analytics/evaluation/index.html @@ -0,0 +1,2421 @@ + + + + + + + + + + + + + + + + + + + + Automatic evaluation - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + +

Automatic evaluation

+ +

phospho enables you to evaluate the quality (success or failure) of the interactions between your users and your LLM app.

+

Every time you log a task, phospho will automatically evaluate the success of the task.

+

How does phospho evaluate tasks?

+

The evaluation is based on LLM self-critique.

+

The evaluation leverages the following sources of information:
- The tasks annotated in the phospho webapp by you and your team
- The user feedback sent to phospho
- The system_prompt (str) parameter in metadata when logging
- Previous tasks in the same session

+

If this information is not available, phospho falls back to default heuristics.

+

How to improve the automatic evaluation?

+

To improve the automatic evaluation, you can:
- Label tasks in the phospho webapp. Invite your team members to help you!
- Gather user feedback
- Pass the system_prompt (str) parameter in metadata when logging
- Group tasks in sessions
- Override the task evaluations with the analytics endpoints

+

Annotate in the phospho webapp

+

In the phospho dashboard, you can annotate tasks as a success or a failure.

+

Thumbs up / Thumbs down

+

In the Transcript tab, view tasks to access the thumbs up and thumbs down buttons.
- A thumbs up means that the task was successful.
- A thumbs down means that the task failed.

+

Update the evaluation by clicking on the thumbs.

+

The button changes color to mark that this task was evaluated by a human, and not by phospho.

+

Notes

+

Add notes and any kind of text with the Notes button next to the thumbs.

+

If there is a note already written, the color of the button changes.

+

Annotate with User feedback

+

You can gather annotations any way you want. For example, if you have your own tool to collect feedback (such as thumbs up/thumbs down in your chat interface), you can choose to use the phospho API.

+

Trigger the API endpoint to send your annotations to phospho at scale.

+
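For example, if you already collect thumbs up/thumbs down in your own interface, a minimal sketch with the Python SDK could look like this (the task_id handling is covered in the user feedback guide linked below):

import phospho

phospho.init(project_id="your_project_id", api_key="your_api_key")

# task_id of the task the user is reacting to (see the user feedback guide)
task_id = "the_task_id_you_logged"

# Forward the user's thumbs up / thumbs down to phospho as an annotation
phospho.user_feedback(
    task_id=task_id,
    flag="success",  # or "failure"
    source="user",
    notes="Optional free-text comment from the user",
)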

Read the full guide about user feedback to learn more.

+

Visualize the results

+

Visualize the aggregated results of the evaluations in the Dashboard tab of the phospho webapp.

+

You can also visualize the results for each task in the Sessions tab. Click on a session to see the list of tasks in the session.

+

A green thumbs up means that the task was successful. A red thumbs down means that the task failed. Improve the automatic evaluation by clicking on the thumbs to annotate the task if needed.

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/analytics/events/index.html b/analytics/events/index.html new file mode 100644 index 0000000..96d5f70 --- /dev/null +++ b/analytics/events/index.html @@ -0,0 +1,2507 @@ + + + + + + + + + + + + + + + + + + + + + + + + Event detection - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + +

Event detection

+ +

Learn how to define and run events in phospho, and also how they work under the hood and how to improve them.

+

What are events in phospho?

+

Events are actions or behaviours that you want to track in your data. There are three types of events:

+
    +
  • Tags: Tags are detected in the data and can be used to filter data. Tags are described in natural language. Tags are either present, or not present in a message.
  • +
  • Scores: Scores are values between 1 and 5 that are assigned to a message. Scores can be used to track the quality of the conversation.
  • +
  • Categories: Categories are the result of a classification. Use categories to classify messages in different classes. For example, if you have a set of user intents, you can classify messages in these intents.
  • +
+

Create an event

+

An event is a specific interaction between a user and the system you want to track.

+

To define an event, go to the Events tab in the phospho platform and click on the Add button.

+

Add Event

+

In this tab you can set up events in natural language. In the image above, we have set up an event to detect when the system is unable to answer the user's question.

+

By default, events are detected on all the newly imported data, but not on the past data. You need to run the events on the past data to get insights.

+

Run events on imported data

+

Once you've defined your events, you need to run them on past data.

+

Click on the Detect events button in the Events tab to run an event on your data.

+

Detect events

+

How are events detected?

+

Every message logged to phospho goes through an analytics pipeline. In this pipeline, phospho looks for events defined in your project settings.

+

This pipeline uses a combination of rules, machine learning, and large language models to detect events. The rules are defined in the Analytics tab of the phospho dashboard.

+

How good is the event detection?

+

To help you keep track of and improve the event detection, phospho lets you annotate and validate the events detected in your data.

+

Click on an event in the Transcripts to annotate it. This will display a dropdown where you can validate, remove or edit the event.

+

Advanced performance metrics (F1 Score, Accuracy, Recall, Precision, R-squared, MSE) are available when you click on an event in the Analytics tab.

+

Automatic improvement of the event detection

+

The event detection models are automatically improved and updated using your feedback.

+

+The more you annotate and validate the events on the platform, the better the events become! +

+

Click on an event in the Transcripts to annotate it. This displays a dropdown where you can validate, remove or edit the event.

+

We are constantly improving our algorithms to provide the best results. We're an open source project, so feel free to open an issue on our GitHub or contribute to the codebase. We would love to hear from you!

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/analytics/fine-tuning/index.html b/analytics/fine-tuning/index.html new file mode 100644 index 0000000..2062784 --- /dev/null +++ b/analytics/fine-tuning/index.html @@ -0,0 +1,2420 @@ + + + + + + + + + + + + + + + + + + + + Event Fine-tuning - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + +

Event Fine-tuning

+ +

+ LLM fine-tuning for event detection is in Alpha. Contact us to request access. +

+

Preparing your data

+

To fine-tune a model for event detection, you need to prepare a csv dataset that contains the following columns:

+
    +
  • detection_scope (Literal): can only be one of the following values: task_input_only or task_output_only
  • +
  • task_input (str): the input text for a task (usually the user input)
  • +
  • task_output (str): the output text for a task (usually the assistant response)
  • +
  • event_description (str): the event description, like the prompt you use to define the event you want to detect while using phospho
  • +
  • label (bool): True if the event is indeed present in the text, False otherwise
  • +
+

A good dataset size is at least 2000 examples.

+
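As an illustration, here is a minimal sketch of how such a CSV could be assembled with pandas. The column names follow the list above; the example rows and the file name are made up.

import pandas as pd

rows = [
    {
        "detection_scope": "task_input_only",
        "task_input": "I want my money back",
        "task_output": "I'm sorry to hear that. Let me check your order.",
        "event_description": "The user is asking for a refund",
        "label": True,
    },
    {
        "detection_scope": "task_input_only",
        "task_input": "What are your opening hours?",
        "task_output": "We are open from 9am to 6pm.",
        "event_description": "The user is asking for a refund",
        "label": False,
    },
    # ... aim for at least 2000 examples in total
]

pd.DataFrame(rows).to_csv("event_detection_dataset.csv", index=False)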

Uploading the dataset to phospho

+

To upload the dataset to phospho, use the API directly. Don't forget to set your API key in the Authorization header.

+
curl -X 'POST' \
+  'https://api.phospho.ai/v2/files' \
+  -H 'accept: application/json' \
+  -H 'Authorization: Bearer $PHOSPHO_API_KEY' \
+  -H 'Content-Type: multipart/form-data' \
+  -F 'file=@/path/to/your/local/file.csv;type=text/csv'
+
+

Keep the file_id returned by the API; you will need it to fine-tune the model.

+

Launching the fine-tuning

+

We recommend using the mistralai/Mistral-7B-Instruct-v0.1 model for event detection. Once the dataset is uploaded, you can fine-tune the model using the following API call:

+
curl -X 'POST' \
+  'https://api.phospho.ai/v2/fine_tuning/jobs' \
+  -H 'accept: application/json' \
+  -H 'Authorization: Bearer $PHOSPHO_API_KEY' \
+  -H 'Content-Type: application/json' \
+  -d '{
+  "file_id": "YOUR_FILE_ID",
+  "parameters": {"detection_scope": "YOUR_DETECTION_SCOPE", "event_description": "YOUR EVENT DESCRIPTION HERE"},
+  "model": "mistralai/Mistral-7B-Instruct-v0.1"
+}'
+
+

Note the fine-tuning job id returned by the API; you will need it to check the status of the job. The job should take approximately 20 minutes to complete.

+

The fine-tuning job will take some time to complete. You can check the status of the job using the following API call:

+
curl -X 'GET' \
+  'https://api.phospho.ai/v2/fine_tuning/jobs/FINE_TUNING_JOB_ID' \
+  -H 'accept: application/json' \
+  -H 'Authorization: Bearer $PHOSPHO_API_KEY'
+
+

When the fine-tuning job is completed, you can get the fine-tuned model id in the fine_tuned_model field of the response.

+

Using the fine-tuned model for your event detection

+

You can now use the fine-tuned model to detect events in your text. To do so, update the configs.

+

First, get your current project settings:

+
curl -X 'GET' \
+  'https://api.phospho.ai/v2/projects/YOUR_PROJECT_ID' \
+  -H 'accept: application/json' \
+  -H 'Authorization: Bearer $PHOSPHO_API_KEY'
+
+

+ The POST request will overwrite the current project settings. Make sure to + include all the settings you want to keep in the new settings object. +

+

In the settings object, add (or change) the detection_engine to the fine_tuned_model id you got from the fine-tuning job. Then, update the project settings:

+
curl -X 'POST' \
+  'https://api.phospho.ai/v2/projects/YOUR_PROJECT_ID' \
+  -H 'accept: application/json' \
+  -H 'Authorization: Bearer $PHOSPHO_API_KEY' \
+  -H 'Content-Type: application/json' \
+  -d '{
+  "settings": YOUR_UPDATED_SETTINGS_OBJECT
+}'
+
+

You're all set! You can now use the fine-tuned model to detect events in your text.

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/analytics/language/index.html b/analytics/language/index.html new file mode 100644 index 0000000..acd33dd --- /dev/null +++ b/analytics/language/index.html @@ -0,0 +1,2344 @@ + + + + + + + + + + + + + + + + + + + + + + + + Language Detection - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Language Detection

+ +

Detect what language your users are speaking in. This lets you analyze in what language your users are interacting with your assistant, and improve it accordingly.

+

Language detection is based on the user message, so the interaction below will be flagged as English, even though the assistant answers in French.

+ + + + + + + + + + + + + +
 User | Assistant
 What can you do? | Je ne peux pas répondre en anglais
+

The language detection method is based on keywords. If the input is very short, the language detection might not be accurate.

+

In the Transcripts, you can filter by language.

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/analytics/sentiment-analysis/index.html b/analytics/sentiment-analysis/index.html new file mode 100644 index 0000000..899cfdd --- /dev/null +++ b/analytics/sentiment-analysis/index.html @@ -0,0 +1,2336 @@ + + + + + + + + + + + + + + + + + + + + + + + + Sentiment Analysis - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Sentiment Analysis

+ +

Detect the sentiment of your users. An automatic sentiment analysis is performed on the user message. This lets you know whether your users are happy, sad, or neutral.

+

The sentiment and its magnitude are scored. The score corresponds to a negative or positive sentiment, and the magnitude to how strong it is.

+

We then translate this data into a simple, readable label for you: Positive, Neutral, Mixed and Negative.

+
    +
  • Positive: The sentiment score is greater than 0.3
  • +
  • Neutral: The sentiment score is between -0.3 and 0.3
  • +
  • Mixed: The sentiment score is between -0.3 and 0.3 but the magnitude is greater than 0.6
  • +
  • Negative: The sentiment score is less than -0.3
  • +
+
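As an illustration, the thresholds above can be read as the following mapping (a sketch in Python, not phospho's actual implementation):

def sentiment_label(score: float, magnitude: float) -> str:
    """Map a sentiment score and magnitude to a label, following the thresholds above."""
    if score > 0.3:
        return "Positive"
    if score < -0.3:
        return "Negative"
    # Score is between -0.3 and 0.3: Mixed if the magnitude is strong, Neutral otherwise
    if magnitude > 0.6:
        return "Mixed"
    return "Neutral"

print(sentiment_label(0.1, 0.8))  # Mixed
print(sentiment_label(0.5, 0.2))  # Positive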

You can also filter your data by sentiment in the Transcripts.

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/analytics/sessions-and-users/index.html b/analytics/sessions-and-users/index.html new file mode 100644 index 0000000..18ea86a --- /dev/null +++ b/analytics/sessions-and-users/index.html @@ -0,0 +1,2739 @@ + + + + + + + + + + + + + + + + + + + + + + + + Sessions and Users - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Sessions and Users

+ +

A task is a single operation made by the user. For example, a user sending a question to ChatGPT and receiving an answer is a task.

+

A session groups multiple tasks that happen in the same context. For example, multiple messages in the same ChatGPT chat form a session.

+

A user is the end user of your LLM app. For example, the human chatting with ChatGPT.

+
+

Info

+

Tasks, sessions and users are just abstractions. They are meant to help you understand the context of a log. You can use them as you want.

+

For example,
- A task can be "Fetch documents in a database" for a RAG.
- A session can be "The code completions in a single file" for a coding copilot.
- A user can be "The microservice querying the API" for a question answering model.

+
+

Tasks

+

Inputs and Outputs

+

A task is made of an input and an optional output, which are text readable by humans. Think of them as the messages in a chat.

+

On top of that, you can pass a raw_input and a raw_output. Those are the raw data that your LLM app received and produced. They are mostly meant for the developers of your LLM app.

+

Metadata

+

To help you understand the context of a task, you can pass a metadata dict to your tasks.

+

For example, the version of the model used, the generation time, the system prompt, the user_id, etc.

+
+
+
+
import phospho
+
+phospho.init()
+
+phospho.log(
+    input="What is the meaning of life?",
+    output="42",
+    # Metadata
+    raw_input={"chat_history": ...},
+    metadata={
+        "system_prompt": "You are a helpful assistant.",
+        "version_id": "1.0.0",
+        "generation_time": 0.1,
+    },
+)
+
+
+
+
import { phospho } from "phospho";
+
+phospho.init();
+
+phospho.log({
+    input: "What is the meaning of life?",
+    output: "42",
+    // Metadata
+    raw_input: {"chat_history": ...},
+    metadata: {
+        "system_prompt": "You are a helpful assistant.",
+        "version_id": "1.0.0",
+        "generation_time": 0.1,
+    },
+});
+
+
+
+
+

The metadata is a dictionary that can contain any key-value pair. We recommend sticking to str keys and str or float values.

+

Note that the output is optional, but the input is required.

+

Special metadata keys

+
    +
  • system_prompt: The prompt used to generate the output. It will be displayed separately in the UI.
  • +
  • version_id: The version of the app. Used for AB testing.
  • +
  • user_id: The id of the user. Used for user analytics.
  • +
+

Tasks are not just calls to LLMs

+

A task can be a call to a LLM. But it can also be something completely different.

+

For example, a task can be a call to a database, or the result of a complex chain of thought.

+

Tasks are an abstraction that you can use as you want.

+

Task Id

+

By default, when logging, a task id is automatically generated for you.

+

Generating your own task id is useful to attach user feedback later on (on this topic, see User Feedback).

+
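For reference, a minimal sketch with the Python SDK (the same helpers are detailed in the User Feedback guide):

import phospho

phospho.init()

# Generate your own task id and log with it, so you can reference the task later
task_id = phospho.new_task()
phospho.log(input="What is the meaning of life?", output="42", task_id=task_id)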

Sessions

+

If you're using phospho in a conversational app such as a chatbot, group tasks together into sessions.

+
    +
  • Sessions are easier to read for humans.
  • +
  • They improve evaluations and event detections by providing context.
  • +
  • They help you understand the user journey.
  • +
+

Session Id

+

To create sessions, pass a session_id when logging.

+

The session id can be any string. However, we recommend using a randomly generated UUID. We provide a helper function to generate a session id.

+
+
+
+
session_id = phospho.new_session()
+
+phospho.log(
+    input="What is the meaning of life?",
+    output="42",
+    session_id=session_id,
+)
+
+
+
+
const sessionId = phospho.newSession();
+
+phospho.log({
+    input: "What is the meaning of life?",
+    output: "42",
+    sessionId: sessionId,
+});
+
+
+
+
import phospho
+from phospho.integrations import PhosphoLangchainCallbackHandler
+
+session_id = phospho.new_session()
+
+response = retrieval_chain.invoke(
+    "Chain input", 
+    config={"callbacks": [
+        # Pass the session_id to the callback
+        PhosphoLangchainCallbackHandler(session_id=session_id)
+    ]}
+)
+
+
+
+
+

Session insights

+

Sessions are useful for insights about short-term user behavior.
- Monitor how long a user chats with your LLM app before disconnecting
- Compute the average number of messages per session
- Discover what kind of messages end a session

+

Users

+

Find out how specific users interact with your LLM app by logging the user id.

+

To do so, attach tasks and sessions to a user_id when logging. The user id can be any string.

+
+
+
+
phospho.log(
+    input="What is the meaning of life?",
+    output="42",
+    user_id="roger@gmail.com",
+)
+
+
+
+
phospho.log({
+    input: "What is the meaning of life?",
+    output: "42",
+    user_id: "roger@gmail.com",
+});
+
+
+
+
+

User analytics are available in the Insights/Users tab.
- Discover aggregated metrics (number of tasks, average session duration, etc.)
- Access the tasks and sessions of a user by clicking on the corresponding row

+

Monitoring users helps you discover power users of your app, abusive users, or users who are struggling with your LLM app.

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/analytics/tagging/index.html b/analytics/tagging/index.html new file mode 100644 index 0000000..a081ec1 --- /dev/null +++ b/analytics/tagging/index.html @@ -0,0 +1,2620 @@ + + + + + + + + + + + + + + + + + + + + + + + + Automatic tagging - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Automatic tagging

+ +

How are tags detected?

+

Every message logged to phospho goes through an analytics pipeline. In this pipeline, phospho looks for tags defined in your project settings.

+

Tags are described in natural language. Create tags to detect topics, hallucinations, behaviours, intents, or any other concept you want to track.

+

Tags are displayed on the platform and you can use them to filter data.

+

Be notified when a tag is detected with webhooks.

+

Example of tags

+
    +
  • The user is trying to book a flight
  • +
  • The user thanked the agent for its help
  • +
  • The user is asking for a refund
  • +
  • The user bought a product
  • +
  • The assistant responded something that could be considered financial advice
  • +
  • The assistant talked as if he was a customer, and not a support
  • +
+

Create tags

+

Go to the Analytics tab of the phospho dashboard, and click Add Tagger on the right.

+

You will find some event templates like Coherence and Plausibility to get you started.

+

Events tab

+

Tag definition

+

The event description is a natural language description of the tag. Explain how to detect the tag in an interaction as if you were explaining it to a 5 year old or an alien.

+

In the description, refer to your user as "the user" and to your LLM app as "the assistant".

+
+

Example of an event description

+
+

The user is trying to book a flight. The user asked a question about a flight. Don't include flight suggestions from the agent if the user didn't ask for them.

+
+
+

Manage Tags in the Analytics tab. Click delete to delete a tag detector.

+

Tag suggestion

+

Note that you can also use the magic wand button on any session to get a suggestion for a possible tag that has been detected in the session.

+

Tag suggestion

+

The button is right next to "Events" in the Session tab.

+

Webhooks

+

Add an optional webhook to be notified when an event is detected. Click on Additional settings to add the webhook URL and an optional Authorization header.

+

What is a webhook?

+

Webhooks are automated messages sent from apps when something happens. They have a payload and are sent to a unique URL, which is like an app's phone number or address.

+

If you have an LLM app with a backend, you can create webhooks.

+

How to use the webhook?

+

Every time the event is detected, phospho will send a POST request to the webhook with this payload:

+
{
+    "id": "xxxxxxxxx", // Unique identifier of the detected event
+    "created_at": 13289238198, // Unix timestamp (in seconds)
+    "event_name": "privacy_policy", // The name of the event, as written in the dashboard
+    "task_id": "xxxxxxx", // The task id where the event was detected
+    "session_id": "xxxxxxx", // The session id where the event was detected
+    "project_id": "xxxxxxx", // The project id where the event was detected
+    "org_id": "xxxxxxx", // The organization id where the event was detected
+    "webhook": "https://your-webhook-url.com", // The webhook URL
+    "source": "phospho-unknown", // Starts with phospho if detected by phospho
+}
+
+

Retrieve the messages using the task_id and the phospho API.

+
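As an example, a minimal webhook receiver could look like this. This is a sketch assuming a FastAPI backend; the payload fields follow the JSON above and the notification logic is left as a placeholder.

from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/phospho-webhook")
async def phospho_webhook(request: Request):
    payload = await request.json()
    # Fields follow the payload documented above
    event_name = payload["event_name"]
    task_id = payload["task_id"]
    # Placeholder: notify your team, update your UI, or fetch the messages
    # for this task_id through the phospho API
    print(f"Event '{event_name}' detected on task {task_id}")
    return {"status": "ok"}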

Examples

+

Use webhooks to send Slack notifications, emails, SMS, UI updates, or to trigger a function in your backend.

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/analytics/usage-based-billing/index.html b/analytics/usage-based-billing/index.html new file mode 100644 index 0000000..161d19a --- /dev/null +++ b/analytics/usage-based-billing/index.html @@ -0,0 +1,2487 @@ + + + + + + + + + + + + + + + + + + + + + + + + Usage-based billing - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + +

Usage-based billing

+ +

This page documents the usage-based billing plan of the hosted phospho platform.

+

What is usage-based billing?

+

Every analytics run on phospho consumes a certain amount of credits.

+

At the end of the month, the total credits consumed by all analytics runs are summed up, and you are billed based on that total.

+

The cost per credit depends on the plan you are on.

+

How many credits does an analytics run consume?

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
 Analytics run | Credits consumed
 Logging 1 Task | 0
 Event detection on 1 Task: Tagger | 1
 Event detection on 1 Task: Scorer | 1
 Event detection on 1 Task: Classifier | 1
 Clustering on 1 Task | 2
 Event detection on 1 Session: Tagger | 1 * number of tasks in the session
 Event detection on 1 Session: Scorer | 1 * number of tasks in the session
 Event detection on 1 Session: Classifier | 1 * number of tasks in the session
 Clustering on 1 Session | 2 * number of tasks in the session
 Language detection on 1 Task | 1
 Sentiment detection on 1 Task | 1
+
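As a rough illustration, a back-of-the-envelope estimate based on the table above could look like this. The traffic numbers are made up, and the final bill depends on your plan's price per credit.

# Credits per analytics run, taken from the table above
CREDITS_PER_TASK = {
    "tagger": 1,
    "scorer": 1,
    "classifier": 1,
    "clustering": 2,
    "language": 1,
    "sentiment": 1,
}

# Made-up monthly traffic: 10,000 tasks, each with one tagger, language and sentiment run
n_tasks = 10_000
monthly_credits = n_tasks * (
    CREDITS_PER_TASK["tagger"]
    + CREDITS_PER_TASK["language"]
    + CREDITS_PER_TASK["sentiment"]
)
print(monthly_credits)  # 30000 credits for the month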

How to optimize credit consumption?

+
    +
  • Instead of using multiple taggers, use a single classifier
  • +
  • Filter the scope of clustering to only the required tasks
  • +
  • Disable unnecessary analytics in Project settings
  • +
+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/analytics/user-feedback/index.html b/analytics/user-feedback/index.html new file mode 100644 index 0000000..4bab9dc --- /dev/null +++ b/analytics/user-feedback/index.html @@ -0,0 +1,2707 @@ + + + + + + + + + + + + + + + + + + + + + + + + User Feedback - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + +

User Feedback

+ +

Logging user feedback is a crucial part of evaluating an LLM app. Even though user feedback is subjective and biased towards the negative, it is a valuable source of information for improving the quality of your app.

+

Set up user feedback in your app to log it to phospho, review it in the webapp, improve the automatic evaluations, and make your app better.

+

Architecture: what's the task_id?

+

In your app, you should collect user feedback after having logged a task to phospho. Every task logged to phospho is identified by a unique task_id.

+

For phospho to know what task the user is giving feedback on, you need to keep track of the task_id.

+

There are two ways to manage the task_id: frontend or backend.

+

Whichever way you choose, there are helpers in the phospho package to make it easier.

+

Option 1: Task id managed by Frontend

+
    +
  1. In your frontend, you generate a task id using UUID V4
  2. +
  3. You pass this task id to your backend. The backend executes the task and logs it to phospho with this task id.
  4. +
  5. In your frontend, you collect user feedback based on this task id.
  6. +
+

Option 2: Task id managed by Backend

+
    +
  1. In your frontend, you ask your backend to execute a task.
  2. +
  3. The backend generates a task id using UUID V4, and logs the task to phospho with this task id.
  4. +
  5. The backend returns the task id to the frontend.
  6. +
  7. In your frontend, you collect user feedback based on this task id.
  8. +
+
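Putting Option 2 together, here is a minimal sketch of a backend endpoint that generates the task id, logs to phospho, and returns the id to the frontend. This assumes a FastAPI backend; the endpoint name and the answer_question helper are made up.

import phospho
from fastapi import FastAPI

phospho.init(project_id="your_project_id", api_key="your_api_key")
app = FastAPI()

def answer_question(question: str) -> str:
    # Placeholder for your actual LLM call
    return "42"

@app.post("/ask")
async def ask(question: str):
    task_id = phospho.new_task()  # the backend generates the task id
    answer = answer_question(question)
    phospho.log(input=question, output=answer, task_id=task_id)  # log with this id
    return {"answer": answer, "task_id": task_id}  # return the task id to the frontend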

Backend: Log to phospho with a known task_id

+
+
+
+

The phospho package provides multiple helpers to manage the task_id.

+
pip install phospho
+
+

Make sure you have initialized the phospho package with your project_id and api_key somewhere in your app.

+
import phospho
+phospho.init(project_id="your_project_id", api_key="your_api_key")
+
+

You can fetch the task_id generated by phospho.log:

+
logged_content = phospho.log(input="question", output="answer")
+task_id: str = logged_content["task_id"]
+
+

To generate a new task_id, you can use the new_task function.

+
task_id: str = phospho.new_task()
+
+# Pass it to phospho.log to create a task with this id
+phospho.log(input="question", output="answer", task_id=task_id)
+
+

To get the latest task_id, you can use the latest_task_id variable.

+
latest_task_id = phospho.latest_task_id
+
+
+
+

The phospho package provides multiple helpers to manage the task_id.

+
npm install phospho
+
+

Make sure you have initialized the phospho package with your project_id and api_key somewhere in your app.

+
import { phospho } from "phospho";
+phospho.init({ projectId: "your_project_id", apiKey: "your_api_key" });
+
+

You can fetch the task_id generated by phospho.log:

+
const loggedContent = await phospho.log({
+  input: "question",
+  output: "answer",
+});
+const taskId: string = loggedContent.task_id;
+
+

The task_id from the loggedContent is in snake_case.

+

To generate a new task_id, you can use the newTask function.

+
const taskId = phospho.newTask();
+
+// Pass it to phospho.log to create a task with this id
+phospho.log({ input: "question", output: "answer", taskId: taskId });
+
+

To get the latest task_id, you can use the latestTaskId variable.

+
const latestTaskId = phospho.latestTaskId;
+
+
+
+

When using the API directly, you need to manage the task_id by yourself.

+

Create a task_id by generating a string hash. It needs to be unique for each task.

+
TASK_ID=$(uuidgen)
+
+

Pass this task_id to the log endpoint.

+
curl -X POST https://api.phospho.ai/v2/log/$PHOSPHO_PROJECT_ID \
+-H "Authorization: Bearer $PHOSPHO_API_KEY" \
+-H "Content-Type: application/json" \
+-d '{
+    "batched_log_events": [
+        {
+            "input": "your_input",
+            "output": "your_output",
+            "task_id": "$TASK_ID"
+        }
+    ]
+}'
+
+
+
+
+

Frontend: Collect user feedback

+

Once your backend has executed the task and logged it to phospho with a known task_id, send the task_id back to your frontend.

+

In your frontend, using the task_id, you can collect user feedback and send it to phospho.

+
+
+
+

We provide React components to kickstart your user feedback collection in your app.

+
npm install phospho-ui-react
+
+
import "./App.css";
+import { FeedbackDrawer, Feedback } from "phospho-ui-react";
+import "phospho-ui-react/dist/index.css";
+
+function App() {
+  return (
+    <div className="App">
+      <header className="App-header">
+        <FeedbackDrawer
+          // Get your project_id on phospho
+          projectId="..."
+          // The task_id logged to phospho. Fetch it from your backend after logging
+          taskId="..."
+          // Source will be also logged to phospho
+          source={"user_feedback"}
+          // Customize the drawer
+          title="Send Feedback"
+          description="Help us improve our product."
+          onSubmit={(feedback: Feedback) =>
+            console.log("Submitted: ", feedback)
+          }
+          onClose={(feedback: Feedback) => console.log("Closed: ", feedback)}
+        />
+      </header>
+    </div>
+  );
+}
+
+export default App;
+
+
+
+

In the browser, use the sendUserFeedback function. This function doesn't require your phospho API key, so you don't risk leaking it in the browser. However, it still requires the projectId.

+

Here is how to use the sendUserFeedback function.

+
import { sendUserFeedback } from "phospho";
+
+// Handle logging in your backend, and send the task_id to the browser
+const taskId = await fetch("https://your-backend.com/some-endpoint", {
+  method: "POST",
+  headers: {
+    "Content-Type": "application/json",
+  },
+  body: JSON.stringify({
+    your: "stuff",
+  }),
+})
+  .then((res) => res.json())
+  .then((data) => data.task_id);
+
+// When you collect feedback, send it to phospho
+// For example, when the user clicks on a button
+sendUserFeedback({
+  projectId: "your_project_id",
+  taskId: taskId,
+  flag: "success", // or "failure"
+  source: "user",
+  notes: "Some notes (can be None)",
+});
+
+
+
+

If you are using a different language or a different way to manage the frontend, you can use the API endpoint tasks/{task-id}/flag directly.

+

This endpoint is public. You only need to pass the task_id and project_id. This is done to avoid leaking your phospho API key.

+
curl -X POST https://api.phospho.ai/v2/tasks/$TASK_ID/flag \
+-H "Content-Type: application/json" \
+-d '{
+    "project_id": "$PHOSPHO_PROJECT_ID",
+    "flag": "success",
+    "flag_source": "user"
+    "notes": "This is what the user said about this task"
+}'
+
+
+
+
+

Backend: Manage user feedback collection

+

If you don't want to collect user feedback in the frontend, you can instead create an endpoint in your backend and collect user feedback there.

+
+
+
+

The phospho python package provides a user_feedback function to log user feedback.

+
# See the previous section to get the task_id
+task_id = ...
+
+phospho.user_feedback(
+    task_id=task_id,
+    flag="success", # or "failure"
+    source="user",
+    notes="Some notes (can be None)", # optional
+)
+
+
+
+

The phospho javascript module provides a userFeedback function to log user feedback.

+
const taskId = ... // See the previous section to get the task_id
+
+phospho.userFeedback({
+  taskId: taskId,
+  flag: "success", // or "failure"
+  flagSource: "user",
+  notes: "Some notes (can be None)",
+});
+
+
+
+

You can use the API endpoint tasks/{task-id}/flag directly.

+
curl -X POST https://api.phospho.ai/v2/tasks/$TASK_ID/flag \
+-H "Authorization: Bearer $PHOSPHO_API_KEY" \
+-H "Content-Type: application/json" \
+-d '{
+    "flag": "success",
+    "flag_source": "user"
+    "notes": "This is what the user said about this task"
+}'
+
+
+
+
+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/api-reference/introduction/index.html b/api-reference/introduction/index.html new file mode 100644 index 0000000..d255ae1 --- /dev/null +++ b/api-reference/introduction/index.html @@ -0,0 +1,2395 @@ + + + + + + + + + + + + + + + + + + + + + + + + Getting started - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Getting started

+ +

Most phospho features are available through the API. The base URL of the phospho API is https://api.phospho.ai/v3.

+

If you do not want to use the API directly, we provide several SDKs to make it easier to integrate phospho into your products:

+ +

The full API reference is available here

+

Dedicated endpoints

+

Contact us at contact@phospho.ai to discuss integrating phospho into your products through dedicated endpoints, allowing seamless, behind-the-scenes functionality for your customers.

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/assets/images/favicon.png b/assets/images/favicon.png new file mode 100644 index 0000000..1cf13b9 Binary files /dev/null and b/assets/images/favicon.png differ diff --git a/assets/javascripts/bundle.f55a23d4.min.js b/assets/javascripts/bundle.f55a23d4.min.js new file mode 100644 index 0000000..01a46ad --- /dev/null +++ b/assets/javascripts/bundle.f55a23d4.min.js @@ -0,0 +1,16 @@ +"use strict";(()=>{var Wi=Object.create;var gr=Object.defineProperty;var Vi=Object.getOwnPropertyDescriptor;var Di=Object.getOwnPropertyNames,Vt=Object.getOwnPropertySymbols,zi=Object.getPrototypeOf,yr=Object.prototype.hasOwnProperty,ao=Object.prototype.propertyIsEnumerable;var io=(e,t,r)=>t in e?gr(e,t,{enumerable:!0,configurable:!0,writable:!0,value:r}):e[t]=r,$=(e,t)=>{for(var r in t||(t={}))yr.call(t,r)&&io(e,r,t[r]);if(Vt)for(var r of Vt(t))ao.call(t,r)&&io(e,r,t[r]);return e};var so=(e,t)=>{var r={};for(var o in e)yr.call(e,o)&&t.indexOf(o)<0&&(r[o]=e[o]);if(e!=null&&Vt)for(var o of Vt(e))t.indexOf(o)<0&&ao.call(e,o)&&(r[o]=e[o]);return r};var xr=(e,t)=>()=>(t||e((t={exports:{}}).exports,t),t.exports);var Ni=(e,t,r,o)=>{if(t&&typeof t=="object"||typeof t=="function")for(let n of Di(t))!yr.call(e,n)&&n!==r&&gr(e,n,{get:()=>t[n],enumerable:!(o=Vi(t,n))||o.enumerable});return e};var Lt=(e,t,r)=>(r=e!=null?Wi(zi(e)):{},Ni(t||!e||!e.__esModule?gr(r,"default",{value:e,enumerable:!0}):r,e));var co=(e,t,r)=>new Promise((o,n)=>{var i=p=>{try{s(r.next(p))}catch(c){n(c)}},a=p=>{try{s(r.throw(p))}catch(c){n(c)}},s=p=>p.done?o(p.value):Promise.resolve(p.value).then(i,a);s((r=r.apply(e,t)).next())});var lo=xr((Er,po)=>{(function(e,t){typeof Er=="object"&&typeof po!="undefined"?t():typeof define=="function"&&define.amd?define(t):t()})(Er,(function(){"use strict";function e(r){var o=!0,n=!1,i=null,a={text:!0,search:!0,url:!0,tel:!0,email:!0,password:!0,number:!0,date:!0,month:!0,week:!0,time:!0,datetime:!0,"datetime-local":!0};function s(k){return!!(k&&k!==document&&k.nodeName!=="HTML"&&k.nodeName!=="BODY"&&"classList"in k&&"contains"in k.classList)}function p(k){var ft=k.type,qe=k.tagName;return!!(qe==="INPUT"&&a[ft]&&!k.readOnly||qe==="TEXTAREA"&&!k.readOnly||k.isContentEditable)}function c(k){k.classList.contains("focus-visible")||(k.classList.add("focus-visible"),k.setAttribute("data-focus-visible-added",""))}function l(k){k.hasAttribute("data-focus-visible-added")&&(k.classList.remove("focus-visible"),k.removeAttribute("data-focus-visible-added"))}function f(k){k.metaKey||k.altKey||k.ctrlKey||(s(r.activeElement)&&c(r.activeElement),o=!0)}function u(k){o=!1}function d(k){s(k.target)&&(o||p(k.target))&&c(k.target)}function y(k){s(k.target)&&(k.target.classList.contains("focus-visible")||k.target.hasAttribute("data-focus-visible-added"))&&(n=!0,window.clearTimeout(i),i=window.setTimeout(function(){n=!1},100),l(k.target))}function L(k){document.visibilityState==="hidden"&&(n&&(o=!0),X())}function X(){document.addEventListener("mousemove",J),document.addEventListener("mousedown",J),document.addEventListener("mouseup",J),document.addEventListener("pointermove",J),document.addEventListener("pointerdown",J),document.addEventListener("pointerup",J),document.addEventListener("touchmove",J),document.addEventListener("touchstart",J),document.addEventListener("touchend",J)}function 
ee(){document.removeEventListener("mousemove",J),document.removeEventListener("mousedown",J),document.removeEventListener("mouseup",J),document.removeEventListener("pointermove",J),document.removeEventListener("pointerdown",J),document.removeEventListener("pointerup",J),document.removeEventListener("touchmove",J),document.removeEventListener("touchstart",J),document.removeEventListener("touchend",J)}function J(k){k.target.nodeName&&k.target.nodeName.toLowerCase()==="html"||(o=!1,ee())}document.addEventListener("keydown",f,!0),document.addEventListener("mousedown",u,!0),document.addEventListener("pointerdown",u,!0),document.addEventListener("touchstart",u,!0),document.addEventListener("visibilitychange",L,!0),X(),r.addEventListener("focus",d,!0),r.addEventListener("blur",y,!0),r.nodeType===Node.DOCUMENT_FRAGMENT_NODE&&r.host?r.host.setAttribute("data-js-focus-visible",""):r.nodeType===Node.DOCUMENT_NODE&&(document.documentElement.classList.add("js-focus-visible"),document.documentElement.setAttribute("data-js-focus-visible",""))}if(typeof window!="undefined"&&typeof document!="undefined"){window.applyFocusVisiblePolyfill=e;var t;try{t=new CustomEvent("focus-visible-polyfill-ready")}catch(r){t=document.createEvent("CustomEvent"),t.initCustomEvent("focus-visible-polyfill-ready",!1,!1,{})}window.dispatchEvent(t)}typeof document!="undefined"&&e(document)}))});var qr=xr((dy,On)=>{"use strict";/*! + * escape-html + * Copyright(c) 2012-2013 TJ Holowaychuk + * Copyright(c) 2015 Andreas Lubbe + * Copyright(c) 2015 Tiancheng "Timothy" Gu + * MIT Licensed + */var $a=/["'&<>]/;On.exports=Pa;function Pa(e){var t=""+e,r=$a.exec(t);if(!r)return t;var o,n="",i=0,a=0;for(i=r.index;i{/*! + * clipboard.js v2.0.11 + * https://clipboardjs.com/ + * + * Licensed MIT ยฉ Zeno Rocha + */(function(t,r){typeof Rt=="object"&&typeof Yr=="object"?Yr.exports=r():typeof define=="function"&&define.amd?define([],r):typeof Rt=="object"?Rt.ClipboardJS=r():t.ClipboardJS=r()})(Rt,function(){return(function(){var e={686:(function(o,n,i){"use strict";i.d(n,{default:function(){return Ui}});var a=i(279),s=i.n(a),p=i(370),c=i.n(p),l=i(817),f=i.n(l);function u(D){try{return document.execCommand(D)}catch(A){return!1}}var d=function(A){var M=f()(A);return u("cut"),M},y=d;function L(D){var A=document.documentElement.getAttribute("dir")==="rtl",M=document.createElement("textarea");M.style.fontSize="12pt",M.style.border="0",M.style.padding="0",M.style.margin="0",M.style.position="absolute",M.style[A?"right":"left"]="-9999px";var F=window.pageYOffset||document.documentElement.scrollTop;return M.style.top="".concat(F,"px"),M.setAttribute("readonly",""),M.value=D,M}var X=function(A,M){var F=L(A);M.container.appendChild(F);var V=f()(F);return u("copy"),F.remove(),V},ee=function(A){var M=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{container:document.body},F="";return typeof A=="string"?F=X(A,M):A instanceof HTMLInputElement&&!["text","search","url","tel","password"].includes(A==null?void 0:A.type)?F=X(A.value,M):(F=f()(A),u("copy")),F},J=ee;function k(D){"@babel/helpers - typeof";return typeof Symbol=="function"&&typeof Symbol.iterator=="symbol"?k=function(M){return typeof M}:k=function(M){return M&&typeof Symbol=="function"&&M.constructor===Symbol&&M!==Symbol.prototype?"symbol":typeof M},k(D)}var ft=function(){var A=arguments.length>0&&arguments[0]!==void 0?arguments[0]:{},M=A.action,F=M===void 0?"copy":M,V=A.container,Y=A.target,$e=A.text;if(F!=="copy"&&F!=="cut")throw new Error('Invalid "action" value, use either "copy" or 
"cut"');if(Y!==void 0)if(Y&&k(Y)==="object"&&Y.nodeType===1){if(F==="copy"&&Y.hasAttribute("disabled"))throw new Error('Invalid "target" attribute. Please use "readonly" instead of "disabled" attribute');if(F==="cut"&&(Y.hasAttribute("readonly")||Y.hasAttribute("disabled")))throw new Error(`Invalid "target" attribute. You can't cut text from elements with "readonly" or "disabled" attributes`)}else throw new Error('Invalid "target" value, use a valid Element');if($e)return J($e,{container:V});if(Y)return F==="cut"?y(Y):J(Y,{container:V})},qe=ft;function Fe(D){"@babel/helpers - typeof";return typeof Symbol=="function"&&typeof Symbol.iterator=="symbol"?Fe=function(M){return typeof M}:Fe=function(M){return M&&typeof Symbol=="function"&&M.constructor===Symbol&&M!==Symbol.prototype?"symbol":typeof M},Fe(D)}function ki(D,A){if(!(D instanceof A))throw new TypeError("Cannot call a class as a function")}function no(D,A){for(var M=0;M0&&arguments[0]!==void 0?arguments[0]:{};this.action=typeof V.action=="function"?V.action:this.defaultAction,this.target=typeof V.target=="function"?V.target:this.defaultTarget,this.text=typeof V.text=="function"?V.text:this.defaultText,this.container=Fe(V.container)==="object"?V.container:document.body}},{key:"listenClick",value:function(V){var Y=this;this.listener=c()(V,"click",function($e){return Y.onClick($e)})}},{key:"onClick",value:function(V){var Y=V.delegateTarget||V.currentTarget,$e=this.action(Y)||"copy",Wt=qe({action:$e,container:this.container,target:this.target(Y),text:this.text(Y)});this.emit(Wt?"success":"error",{action:$e,text:Wt,trigger:Y,clearSelection:function(){Y&&Y.focus(),window.getSelection().removeAllRanges()}})}},{key:"defaultAction",value:function(V){return vr("action",V)}},{key:"defaultTarget",value:function(V){var Y=vr("target",V);if(Y)return document.querySelector(Y)}},{key:"defaultText",value:function(V){return vr("text",V)}},{key:"destroy",value:function(){this.listener.destroy()}}],[{key:"copy",value:function(V){var Y=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{container:document.body};return J(V,Y)}},{key:"cut",value:function(V){return y(V)}},{key:"isSupported",value:function(){var V=arguments.length>0&&arguments[0]!==void 0?arguments[0]:["copy","cut"],Y=typeof V=="string"?[V]:V,$e=!!document.queryCommandSupported;return Y.forEach(function(Wt){$e=$e&&!!document.queryCommandSupported(Wt)}),$e}}]),M})(s()),Ui=Fi}),828:(function(o){var n=9;if(typeof Element!="undefined"&&!Element.prototype.matches){var i=Element.prototype;i.matches=i.matchesSelector||i.mozMatchesSelector||i.msMatchesSelector||i.oMatchesSelector||i.webkitMatchesSelector}function a(s,p){for(;s&&s.nodeType!==n;){if(typeof s.matches=="function"&&s.matches(p))return s;s=s.parentNode}}o.exports=a}),438:(function(o,n,i){var a=i(828);function s(l,f,u,d,y){var L=c.apply(this,arguments);return l.addEventListener(u,L,y),{destroy:function(){l.removeEventListener(u,L,y)}}}function p(l,f,u,d,y){return typeof l.addEventListener=="function"?s.apply(null,arguments):typeof u=="function"?s.bind(null,document).apply(null,arguments):(typeof l=="string"&&(l=document.querySelectorAll(l)),Array.prototype.map.call(l,function(L){return s(L,f,u,d,y)}))}function c(l,f,u,d){return function(y){y.delegateTarget=a(y.target,f),y.delegateTarget&&d.call(l,y)}}o.exports=p}),879:(function(o,n){n.node=function(i){return i!==void 0&&i instanceof HTMLElement&&i.nodeType===1},n.nodeList=function(i){var a=Object.prototype.toString.call(i);return i!==void 0&&(a==="[object NodeList]"||a==="[object 
HTMLCollection]")&&"length"in i&&(i.length===0||n.node(i[0]))},n.string=function(i){return typeof i=="string"||i instanceof String},n.fn=function(i){var a=Object.prototype.toString.call(i);return a==="[object Function]"}}),370:(function(o,n,i){var a=i(879),s=i(438);function p(u,d,y){if(!u&&!d&&!y)throw new Error("Missing required arguments");if(!a.string(d))throw new TypeError("Second argument must be a String");if(!a.fn(y))throw new TypeError("Third argument must be a Function");if(a.node(u))return c(u,d,y);if(a.nodeList(u))return l(u,d,y);if(a.string(u))return f(u,d,y);throw new TypeError("First argument must be a String, HTMLElement, HTMLCollection, or NodeList")}function c(u,d,y){return u.addEventListener(d,y),{destroy:function(){u.removeEventListener(d,y)}}}function l(u,d,y){return Array.prototype.forEach.call(u,function(L){L.addEventListener(d,y)}),{destroy:function(){Array.prototype.forEach.call(u,function(L){L.removeEventListener(d,y)})}}}function f(u,d,y){return s(document.body,u,d,y)}o.exports=p}),817:(function(o){function n(i){var a;if(i.nodeName==="SELECT")i.focus(),a=i.value;else if(i.nodeName==="INPUT"||i.nodeName==="TEXTAREA"){var s=i.hasAttribute("readonly");s||i.setAttribute("readonly",""),i.select(),i.setSelectionRange(0,i.value.length),s||i.removeAttribute("readonly"),a=i.value}else{i.hasAttribute("contenteditable")&&i.focus();var p=window.getSelection(),c=document.createRange();c.selectNodeContents(i),p.removeAllRanges(),p.addRange(c),a=p.toString()}return a}o.exports=n}),279:(function(o){function n(){}n.prototype={on:function(i,a,s){var p=this.e||(this.e={});return(p[i]||(p[i]=[])).push({fn:a,ctx:s}),this},once:function(i,a,s){var p=this;function c(){p.off(i,c),a.apply(s,arguments)}return c._=a,this.on(i,c,s)},emit:function(i){var a=[].slice.call(arguments,1),s=((this.e||(this.e={}))[i]||[]).slice(),p=0,c=s.length;for(p;p0&&i[i.length-1])&&(c[0]===6||c[0]===2)){r=0;continue}if(c[0]===3&&(!i||c[1]>i[0]&&c[1]=e.length&&(e=void 0),{value:e&&e[o++],done:!e}}};throw new TypeError(t?"Object is not iterable.":"Symbol.iterator is not defined.")}function z(e,t){var r=typeof Symbol=="function"&&e[Symbol.iterator];if(!r)return e;var o=r.call(e),n,i=[],a;try{for(;(t===void 0||t-- >0)&&!(n=o.next()).done;)i.push(n.value)}catch(s){a={error:s}}finally{try{n&&!n.done&&(r=o.return)&&r.call(o)}finally{if(a)throw a.error}}return i}function q(e,t,r){if(r||arguments.length===2)for(var o=0,n=t.length,i;o1||p(d,L)})},y&&(n[d]=y(n[d])))}function p(d,y){try{c(o[d](y))}catch(L){u(i[0][3],L)}}function c(d){d.value instanceof nt?Promise.resolve(d.value.v).then(l,f):u(i[0][2],d)}function l(d){p("next",d)}function f(d){p("throw",d)}function u(d,y){d(y),i.shift(),i.length&&p(i[0][0],i[0][1])}}function uo(e){if(!Symbol.asyncIterator)throw new TypeError("Symbol.asyncIterator is not defined.");var t=e[Symbol.asyncIterator],r;return t?t.call(e):(e=typeof he=="function"?he(e):e[Symbol.iterator](),r={},o("next"),o("throw"),o("return"),r[Symbol.asyncIterator]=function(){return this},r);function o(i){r[i]=e[i]&&function(a){return new Promise(function(s,p){a=e[i](a),n(s,p,a.done,a.value)})}}function n(i,a,s,p){Promise.resolve(p).then(function(c){i({value:c,done:s})},a)}}function H(e){return typeof e=="function"}function ut(e){var t=function(o){Error.call(o),o.stack=new Error().stack},r=e(t);return r.prototype=Object.create(Error.prototype),r.prototype.constructor=r,r}var zt=ut(function(e){return function(r){e(this),this.message=r?r.length+` errors occurred during unsubscription: +`+r.map(function(o,n){return 
n+1+") "+o.toString()}).join(` + `):"",this.name="UnsubscriptionError",this.errors=r}});function Qe(e,t){if(e){var r=e.indexOf(t);0<=r&&e.splice(r,1)}}var Ue=(function(){function e(t){this.initialTeardown=t,this.closed=!1,this._parentage=null,this._finalizers=null}return e.prototype.unsubscribe=function(){var t,r,o,n,i;if(!this.closed){this.closed=!0;var a=this._parentage;if(a)if(this._parentage=null,Array.isArray(a))try{for(var s=he(a),p=s.next();!p.done;p=s.next()){var c=p.value;c.remove(this)}}catch(L){t={error:L}}finally{try{p&&!p.done&&(r=s.return)&&r.call(s)}finally{if(t)throw t.error}}else a.remove(this);var l=this.initialTeardown;if(H(l))try{l()}catch(L){i=L instanceof zt?L.errors:[L]}var f=this._finalizers;if(f){this._finalizers=null;try{for(var u=he(f),d=u.next();!d.done;d=u.next()){var y=d.value;try{ho(y)}catch(L){i=i!=null?i:[],L instanceof zt?i=q(q([],z(i)),z(L.errors)):i.push(L)}}}catch(L){o={error:L}}finally{try{d&&!d.done&&(n=u.return)&&n.call(u)}finally{if(o)throw o.error}}}if(i)throw new zt(i)}},e.prototype.add=function(t){var r;if(t&&t!==this)if(this.closed)ho(t);else{if(t instanceof e){if(t.closed||t._hasParent(this))return;t._addParent(this)}(this._finalizers=(r=this._finalizers)!==null&&r!==void 0?r:[]).push(t)}},e.prototype._hasParent=function(t){var r=this._parentage;return r===t||Array.isArray(r)&&r.includes(t)},e.prototype._addParent=function(t){var r=this._parentage;this._parentage=Array.isArray(r)?(r.push(t),r):r?[r,t]:t},e.prototype._removeParent=function(t){var r=this._parentage;r===t?this._parentage=null:Array.isArray(r)&&Qe(r,t)},e.prototype.remove=function(t){var r=this._finalizers;r&&Qe(r,t),t instanceof e&&t._removeParent(this)},e.EMPTY=(function(){var t=new e;return t.closed=!0,t})(),e})();var Tr=Ue.EMPTY;function Nt(e){return e instanceof Ue||e&&"closed"in e&&H(e.remove)&&H(e.add)&&H(e.unsubscribe)}function ho(e){H(e)?e():e.unsubscribe()}var Pe={onUnhandledError:null,onStoppedNotification:null,Promise:void 0,useDeprecatedSynchronousErrorHandling:!1,useDeprecatedNextContext:!1};var dt={setTimeout:function(e,t){for(var r=[],o=2;o0},enumerable:!1,configurable:!0}),t.prototype._trySubscribe=function(r){return this._throwIfClosed(),e.prototype._trySubscribe.call(this,r)},t.prototype._subscribe=function(r){return this._throwIfClosed(),this._checkFinalizedStatuses(r),this._innerSubscribe(r)},t.prototype._innerSubscribe=function(r){var o=this,n=this,i=n.hasError,a=n.isStopped,s=n.observers;return i||a?Tr:(this.currentObservers=null,s.push(r),new Ue(function(){o.currentObservers=null,Qe(s,r)}))},t.prototype._checkFinalizedStatuses=function(r){var o=this,n=o.hasError,i=o.thrownError,a=o.isStopped;n?r.error(i):a&&r.complete()},t.prototype.asObservable=function(){var r=new j;return r.source=this,r},t.create=function(r,o){return new To(r,o)},t})(j);var To=(function(e){oe(t,e);function t(r,o){var n=e.call(this)||this;return n.destination=r,n.source=o,n}return t.prototype.next=function(r){var o,n;(n=(o=this.destination)===null||o===void 0?void 0:o.next)===null||n===void 0||n.call(o,r)},t.prototype.error=function(r){var o,n;(n=(o=this.destination)===null||o===void 0?void 0:o.error)===null||n===void 0||n.call(o,r)},t.prototype.complete=function(){var r,o;(o=(r=this.destination)===null||r===void 0?void 0:r.complete)===null||o===void 0||o.call(r)},t.prototype._subscribe=function(r){var o,n;return(n=(o=this.source)===null||o===void 0?void 0:o.subscribe(r))!==null&&n!==void 0?n:Tr},t})(g);var _r=(function(e){oe(t,e);function t(r){var o=e.call(this)||this;return 
o._value=r,o}return Object.defineProperty(t.prototype,"value",{get:function(){return this.getValue()},enumerable:!1,configurable:!0}),t.prototype._subscribe=function(r){var o=e.prototype._subscribe.call(this,r);return!o.closed&&r.next(this._value),o},t.prototype.getValue=function(){var r=this,o=r.hasError,n=r.thrownError,i=r._value;if(o)throw n;return this._throwIfClosed(),i},t.prototype.next=function(r){e.prototype.next.call(this,this._value=r)},t})(g);var _t={now:function(){return(_t.delegate||Date).now()},delegate:void 0};var At=(function(e){oe(t,e);function t(r,o,n){r===void 0&&(r=1/0),o===void 0&&(o=1/0),n===void 0&&(n=_t);var i=e.call(this)||this;return i._bufferSize=r,i._windowTime=o,i._timestampProvider=n,i._buffer=[],i._infiniteTimeWindow=!0,i._infiniteTimeWindow=o===1/0,i._bufferSize=Math.max(1,r),i._windowTime=Math.max(1,o),i}return t.prototype.next=function(r){var o=this,n=o.isStopped,i=o._buffer,a=o._infiniteTimeWindow,s=o._timestampProvider,p=o._windowTime;n||(i.push(r),!a&&i.push(s.now()+p)),this._trimBuffer(),e.prototype.next.call(this,r)},t.prototype._subscribe=function(r){this._throwIfClosed(),this._trimBuffer();for(var o=this._innerSubscribe(r),n=this,i=n._infiniteTimeWindow,a=n._buffer,s=a.slice(),p=0;p0?e.prototype.schedule.call(this,r,o):(this.delay=o,this.state=r,this.scheduler.flush(this),this)},t.prototype.execute=function(r,o){return o>0||this.closed?e.prototype.execute.call(this,r,o):this._execute(r,o)},t.prototype.requestAsyncId=function(r,o,n){return n===void 0&&(n=0),n!=null&&n>0||n==null&&this.delay>0?e.prototype.requestAsyncId.call(this,r,o,n):(r.flush(this),0)},t})(gt);var Lo=(function(e){oe(t,e);function t(){return e!==null&&e.apply(this,arguments)||this}return t})(yt);var kr=new Lo(Oo);var Mo=(function(e){oe(t,e);function t(r,o){var n=e.call(this,r,o)||this;return n.scheduler=r,n.work=o,n}return t.prototype.requestAsyncId=function(r,o,n){return n===void 0&&(n=0),n!==null&&n>0?e.prototype.requestAsyncId.call(this,r,o,n):(r.actions.push(this),r._scheduled||(r._scheduled=vt.requestAnimationFrame(function(){return r.flush(void 0)})))},t.prototype.recycleAsyncId=function(r,o,n){var i;if(n===void 0&&(n=0),n!=null?n>0:this.delay>0)return e.prototype.recycleAsyncId.call(this,r,o,n);var a=r.actions;o!=null&&o===r._scheduled&&((i=a[a.length-1])===null||i===void 0?void 0:i.id)!==o&&(vt.cancelAnimationFrame(o),r._scheduled=void 0)},t})(gt);var _o=(function(e){oe(t,e);function t(){return e!==null&&e.apply(this,arguments)||this}return t.prototype.flush=function(r){this._active=!0;var o;r?o=r.id:(o=this._scheduled,this._scheduled=void 0);var n=this.actions,i;r=r||n.shift();do if(i=r.execute(r.state,r.delay))break;while((r=n[0])&&r.id===o&&n.shift());if(this._active=!1,i){for(;(r=n[0])&&r.id===o&&n.shift();)r.unsubscribe();throw i}},t})(yt);var me=new _o(Mo);var S=new j(function(e){return e.complete()});function Kt(e){return e&&H(e.schedule)}function Hr(e){return e[e.length-1]}function Xe(e){return H(Hr(e))?e.pop():void 0}function ke(e){return Kt(Hr(e))?e.pop():void 0}function Yt(e,t){return typeof Hr(e)=="number"?e.pop():t}var xt=(function(e){return e&&typeof e.length=="number"&&typeof e!="function"});function Bt(e){return H(e==null?void 0:e.then)}function Gt(e){return H(e[bt])}function Jt(e){return Symbol.asyncIterator&&H(e==null?void 0:e[Symbol.asyncIterator])}function Xt(e){return new TypeError("You provided "+(e!==null&&typeof e=="object"?"an invalid object":"'"+e+"'")+" where a stream was expected. 
You can provide an Observable, Promise, ReadableStream, Array, AsyncIterable, or Iterable.")}function Zi(){return typeof Symbol!="function"||!Symbol.iterator?"@@iterator":Symbol.iterator}var Zt=Zi();function er(e){return H(e==null?void 0:e[Zt])}function tr(e){return fo(this,arguments,function(){var r,o,n,i;return Dt(this,function(a){switch(a.label){case 0:r=e.getReader(),a.label=1;case 1:a.trys.push([1,,9,10]),a.label=2;case 2:return[4,nt(r.read())];case 3:return o=a.sent(),n=o.value,i=o.done,i?[4,nt(void 0)]:[3,5];case 4:return[2,a.sent()];case 5:return[4,nt(n)];case 6:return[4,a.sent()];case 7:return a.sent(),[3,2];case 8:return[3,10];case 9:return r.releaseLock(),[7];case 10:return[2]}})})}function rr(e){return H(e==null?void 0:e.getReader)}function U(e){if(e instanceof j)return e;if(e!=null){if(Gt(e))return ea(e);if(xt(e))return ta(e);if(Bt(e))return ra(e);if(Jt(e))return Ao(e);if(er(e))return oa(e);if(rr(e))return na(e)}throw Xt(e)}function ea(e){return new j(function(t){var r=e[bt]();if(H(r.subscribe))return r.subscribe(t);throw new TypeError("Provided object does not correctly implement Symbol.observable")})}function ta(e){return new j(function(t){for(var r=0;r=2;return function(o){return o.pipe(e?b(function(n,i){return e(n,i,o)}):le,Te(1),r?Ve(t):Qo(function(){return new nr}))}}function jr(e){return e<=0?function(){return S}:E(function(t,r){var o=[];t.subscribe(T(r,function(n){o.push(n),e=2,!0))}function pe(e){e===void 0&&(e={});var t=e.connector,r=t===void 0?function(){return new g}:t,o=e.resetOnError,n=o===void 0?!0:o,i=e.resetOnComplete,a=i===void 0?!0:i,s=e.resetOnRefCountZero,p=s===void 0?!0:s;return function(c){var l,f,u,d=0,y=!1,L=!1,X=function(){f==null||f.unsubscribe(),f=void 0},ee=function(){X(),l=u=void 0,y=L=!1},J=function(){var k=l;ee(),k==null||k.unsubscribe()};return E(function(k,ft){d++,!L&&!y&&X();var qe=u=u!=null?u:r();ft.add(function(){d--,d===0&&!L&&!y&&(f=Ur(J,p))}),qe.subscribe(ft),!l&&d>0&&(l=new at({next:function(Fe){return qe.next(Fe)},error:function(Fe){L=!0,X(),f=Ur(ee,n,Fe),qe.error(Fe)},complete:function(){y=!0,X(),f=Ur(ee,a),qe.complete()}}),U(k).subscribe(l))})(c)}}function Ur(e,t){for(var r=[],o=2;oe.next(document)),e}function P(e,t=document){return Array.from(t.querySelectorAll(e))}function R(e,t=document){let r=fe(e,t);if(typeof r=="undefined")throw new ReferenceError(`Missing element: expected "${e}" to be present`);return r}function fe(e,t=document){return t.querySelector(e)||void 0}function Ie(){var e,t,r,o;return(o=(r=(t=(e=document.activeElement)==null?void 0:e.shadowRoot)==null?void 0:t.activeElement)!=null?r:document.activeElement)!=null?o:void 0}var wa=O(h(document.body,"focusin"),h(document.body,"focusout")).pipe(_e(1),Q(void 0),m(()=>Ie()||document.body),G(1));function et(e){return wa.pipe(m(t=>e.contains(t)),K())}function Ht(e,t){return C(()=>O(h(e,"mouseenter").pipe(m(()=>!0)),h(e,"mouseleave").pipe(m(()=>!1))).pipe(t?kt(r=>Le(+!r*t)):le,Q(e.matches(":hover"))))}function Jo(e,t){if(typeof t=="string"||typeof t=="number")e.innerHTML+=t.toString();else if(t instanceof Node)e.appendChild(t);else if(Array.isArray(t))for(let r of t)Jo(e,r)}function x(e,t,...r){let o=document.createElement(e);if(t)for(let n of Object.keys(t))typeof t[n]!="undefined"&&(typeof t[n]!="boolean"?o.setAttribute(n,t[n]):o.setAttribute(n,""));for(let n of r)Jo(o,n);return o}function sr(e){if(e>999){let t=+((e-950)%1e3>99);return`${((e+1e-6)/1e3).toFixed(t)}k`}else return e.toString()}function wt(e){let t=x("script",{src:e});return 
C(()=>(document.head.appendChild(t),O(h(t,"load"),h(t,"error").pipe(v(()=>$r(()=>new ReferenceError(`Invalid script: ${e}`))))).pipe(m(()=>{}),_(()=>document.head.removeChild(t)),Te(1))))}var Xo=new g,Ta=C(()=>typeof ResizeObserver=="undefined"?wt("https://unpkg.com/resize-observer-polyfill"):I(void 0)).pipe(m(()=>new ResizeObserver(e=>e.forEach(t=>Xo.next(t)))),v(e=>O(Ye,I(e)).pipe(_(()=>e.disconnect()))),G(1));function ce(e){return{width:e.offsetWidth,height:e.offsetHeight}}function ge(e){let t=e;for(;t.clientWidth===0&&t.parentElement;)t=t.parentElement;return Ta.pipe(w(r=>r.observe(t)),v(r=>Xo.pipe(b(o=>o.target===t),_(()=>r.unobserve(t)))),m(()=>ce(e)),Q(ce(e)))}function Tt(e){return{width:e.scrollWidth,height:e.scrollHeight}}function cr(e){let t=e.parentElement;for(;t&&(e.scrollWidth<=t.scrollWidth&&e.scrollHeight<=t.scrollHeight);)t=(e=t).parentElement;return t?e:void 0}function Zo(e){let t=[],r=e.parentElement;for(;r;)(e.clientWidth>r.clientWidth||e.clientHeight>r.clientHeight)&&t.push(r),r=(e=r).parentElement;return t.length===0&&t.push(document.documentElement),t}function De(e){return{x:e.offsetLeft,y:e.offsetTop}}function en(e){let t=e.getBoundingClientRect();return{x:t.x+window.scrollX,y:t.y+window.scrollY}}function tn(e){return O(h(window,"load"),h(window,"resize")).pipe(Me(0,me),m(()=>De(e)),Q(De(e)))}function pr(e){return{x:e.scrollLeft,y:e.scrollTop}}function ze(e){return O(h(e,"scroll"),h(window,"scroll"),h(window,"resize")).pipe(Me(0,me),m(()=>pr(e)),Q(pr(e)))}var rn=new g,Sa=C(()=>I(new IntersectionObserver(e=>{for(let t of e)rn.next(t)},{threshold:0}))).pipe(v(e=>O(Ye,I(e)).pipe(_(()=>e.disconnect()))),G(1));function tt(e){return Sa.pipe(w(t=>t.observe(e)),v(t=>rn.pipe(b(({target:r})=>r===e),_(()=>t.unobserve(e)),m(({isIntersecting:r})=>r))))}function on(e,t=16){return ze(e).pipe(m(({y:r})=>{let o=ce(e),n=Tt(e);return r>=n.height-o.height-t}),K())}var lr={drawer:R("[data-md-toggle=drawer]"),search:R("[data-md-toggle=search]")};function nn(e){return lr[e].checked}function Je(e,t){lr[e].checked!==t&&lr[e].click()}function Ne(e){let t=lr[e];return h(t,"change").pipe(m(()=>t.checked),Q(t.checked))}function Oa(e,t){switch(e.constructor){case HTMLInputElement:return e.type==="radio"?/^Arrow/.test(t):!0;case HTMLSelectElement:case HTMLTextAreaElement:return!0;default:return e.isContentEditable}}function La(){return O(h(window,"compositionstart").pipe(m(()=>!0)),h(window,"compositionend").pipe(m(()=>!1))).pipe(Q(!1))}function an(){let e=h(window,"keydown").pipe(b(t=>!(t.metaKey||t.ctrlKey)),m(t=>({mode:nn("search")?"search":"global",type:t.key,claim(){t.preventDefault(),t.stopPropagation()}})),b(({mode:t,type:r})=>{if(t==="global"){let o=Ie();if(typeof o!="undefined")return!Oa(o,r)}return!0}),pe());return La().pipe(v(t=>t?S:e))}function ye(){return new URL(location.href)}function lt(e,t=!1){if(B("navigation.instant")&&!t){let r=x("a",{href:e.href});document.body.appendChild(r),r.click(),r.remove()}else location.href=e.href}function sn(){return new g}function cn(){return location.hash.slice(1)}function pn(e){let t=x("a",{href:e});t.addEventListener("click",r=>r.stopPropagation()),t.click()}function Ma(e){return O(h(window,"hashchange"),e).pipe(m(cn),Q(cn()),b(t=>t.length>0),G(1))}function ln(e){return Ma(e).pipe(m(t=>fe(`[id="${t}"]`)),b(t=>typeof t!="undefined"))}function $t(e){let t=matchMedia(e);return ir(r=>t.addListener(()=>r(t.matches))).pipe(Q(t.matches))}function mn(){let e=matchMedia("print");return 
O(h(window,"beforeprint").pipe(m(()=>!0)),h(window,"afterprint").pipe(m(()=>!1))).pipe(Q(e.matches))}function zr(e,t){return e.pipe(v(r=>r?t():S))}function Nr(e,t){return new j(r=>{let o=new XMLHttpRequest;return o.open("GET",`${e}`),o.responseType="blob",o.addEventListener("load",()=>{o.status>=200&&o.status<300?(r.next(o.response),r.complete()):r.error(new Error(o.statusText))}),o.addEventListener("error",()=>{r.error(new Error("Network error"))}),o.addEventListener("abort",()=>{r.complete()}),typeof(t==null?void 0:t.progress$)!="undefined"&&(o.addEventListener("progress",n=>{var i;if(n.lengthComputable)t.progress$.next(n.loaded/n.total*100);else{let a=(i=o.getResponseHeader("Content-Length"))!=null?i:0;t.progress$.next(n.loaded/+a*100)}}),t.progress$.next(5)),o.send(),()=>o.abort()})}function je(e,t){return Nr(e,t).pipe(v(r=>r.text()),m(r=>JSON.parse(r)),G(1))}function fn(e,t){let r=new DOMParser;return Nr(e,t).pipe(v(o=>o.text()),m(o=>r.parseFromString(o,"text/html")),G(1))}function un(e,t){let r=new DOMParser;return Nr(e,t).pipe(v(o=>o.text()),m(o=>r.parseFromString(o,"text/xml")),G(1))}function dn(){return{x:Math.max(0,scrollX),y:Math.max(0,scrollY)}}function hn(){return O(h(window,"scroll",{passive:!0}),h(window,"resize",{passive:!0})).pipe(m(dn),Q(dn()))}function bn(){return{width:innerWidth,height:innerHeight}}function vn(){return h(window,"resize",{passive:!0}).pipe(m(bn),Q(bn()))}function gn(){return N([hn(),vn()]).pipe(m(([e,t])=>({offset:e,size:t})),G(1))}function mr(e,{viewport$:t,header$:r}){let o=t.pipe(te("size")),n=N([o,r]).pipe(m(()=>De(e)));return N([r,t,n]).pipe(m(([{height:i},{offset:a,size:s},{x:p,y:c}])=>({offset:{x:a.x-p,y:a.y-c+i},size:s})))}function _a(e){return h(e,"message",t=>t.data)}function Aa(e){let t=new g;return t.subscribe(r=>e.postMessage(r)),t}function yn(e,t=new Worker(e)){let r=_a(t),o=Aa(t),n=new g;n.subscribe(o);let i=o.pipe(Z(),ie(!0));return n.pipe(Z(),Re(r.pipe(W(i))),pe())}var Ca=R("#__config"),St=JSON.parse(Ca.textContent);St.base=`${new URL(St.base,ye())}`;function xe(){return St}function B(e){return St.features.includes(e)}function Ee(e,t){return typeof t!="undefined"?St.translations[e].replace("#",t.toString()):St.translations[e]}function Se(e,t=document){return R(`[data-md-component=${e}]`,t)}function ae(e,t=document){return P(`[data-md-component=${e}]`,t)}function ka(e){let t=R(".md-typeset > :first-child",e);return h(t,"click",{once:!0}).pipe(m(()=>R(".md-typeset",e)),m(r=>({hash:__md_hash(r.innerHTML)})))}function xn(e){if(!B("announce.dismiss")||!e.childElementCount)return S;if(!e.hidden){let t=R(".md-typeset",e);__md_hash(t.innerHTML)===__md_get("__announce")&&(e.hidden=!0)}return C(()=>{let t=new g;return t.subscribe(({hash:r})=>{e.hidden=!0,__md_set("__announce",r)}),ka(e).pipe(w(r=>t.next(r)),_(()=>t.complete()),m(r=>$({ref:e},r)))})}function Ha(e,{target$:t}){return t.pipe(m(r=>({hidden:r!==e})))}function En(e,t){let r=new g;return r.subscribe(({hidden:o})=>{e.hidden=o}),Ha(e,t).pipe(w(o=>r.next(o)),_(()=>r.complete()),m(o=>$({ref:e},o)))}function Pt(e,t){return t==="inline"?x("div",{class:"md-tooltip md-tooltip--inline",id:e,role:"tooltip"},x("div",{class:"md-tooltip__inner md-typeset"})):x("div",{class:"md-tooltip",id:e,role:"tooltip"},x("div",{class:"md-tooltip__inner md-typeset"}))}function wn(...e){return x("div",{class:"md-tooltip2",role:"tooltip"},x("div",{class:"md-tooltip2__inner md-typeset"},e))}function Tn(e,t){if(t=t?`${t}_annotation_${e}`:void 0,t){let r=t?`#${t}`:void 0;return 
x("aside",{class:"md-annotation",tabIndex:0},Pt(t),x("a",{href:r,class:"md-annotation__index",tabIndex:-1},x("span",{"data-md-annotation-id":e})))}else return x("aside",{class:"md-annotation",tabIndex:0},Pt(t),x("span",{class:"md-annotation__index",tabIndex:-1},x("span",{"data-md-annotation-id":e})))}function Sn(e){return x("button",{class:"md-clipboard md-icon",title:Ee("clipboard.copy"),"data-clipboard-target":`#${e} > code`})}var Ln=Lt(qr());function Qr(e,t){let r=t&2,o=t&1,n=Object.keys(e.terms).filter(p=>!e.terms[p]).reduce((p,c)=>[...p,x("del",null,(0,Ln.default)(c))," "],[]).slice(0,-1),i=xe(),a=new URL(e.location,i.base);B("search.highlight")&&a.searchParams.set("h",Object.entries(e.terms).filter(([,p])=>p).reduce((p,[c])=>`${p} ${c}`.trim(),""));let{tags:s}=xe();return x("a",{href:`${a}`,class:"md-search-result__link",tabIndex:-1},x("article",{class:"md-search-result__article md-typeset","data-md-score":e.score.toFixed(2)},r>0&&x("div",{class:"md-search-result__icon md-icon"}),r>0&&x("h1",null,e.title),r<=0&&x("h2",null,e.title),o>0&&e.text.length>0&&e.text,e.tags&&x("nav",{class:"md-tags"},e.tags.map(p=>{let c=s?p in s?`md-tag-icon md-tag--${s[p]}`:"md-tag-icon":"";return x("span",{class:`md-tag ${c}`},p)})),o>0&&n.length>0&&x("p",{class:"md-search-result__terms"},Ee("search.result.term.missing"),": ",...n)))}function Mn(e){let t=e[0].score,r=[...e],o=xe(),n=r.findIndex(l=>!`${new URL(l.location,o.base)}`.includes("#")),[i]=r.splice(n,1),a=r.findIndex(l=>l.scoreQr(l,1)),...p.length?[x("details",{class:"md-search-result__more"},x("summary",{tabIndex:-1},x("div",null,p.length>0&&p.length===1?Ee("search.result.more.one"):Ee("search.result.more.other",p.length))),...p.map(l=>Qr(l,1)))]:[]];return x("li",{class:"md-search-result__item"},c)}function _n(e){return x("ul",{class:"md-source__facts"},Object.entries(e).map(([t,r])=>x("li",{class:`md-source__fact md-source__fact--${t}`},typeof r=="number"?sr(r):r)))}function Kr(e){let t=`tabbed-control tabbed-control--${e}`;return x("div",{class:t,hidden:!0},x("button",{class:"tabbed-button",tabIndex:-1,"aria-hidden":"true"}))}function An(e){return x("div",{class:"md-typeset__scrollwrap"},x("div",{class:"md-typeset__table"},e))}function Ra(e){var o;let t=xe(),r=new URL(`../${e.version}/`,t.base);return x("li",{class:"md-version__item"},x("a",{href:`${r}`,class:"md-version__link"},e.title,((o=t.version)==null?void 0:o.alias)&&e.aliases.length>0&&x("span",{class:"md-version__alias"},e.aliases[0])))}function Cn(e,t){var o;let r=xe();return e=e.filter(n=>{var i;return!((i=n.properties)!=null&&i.hidden)}),x("div",{class:"md-version"},x("button",{class:"md-version__current","aria-label":Ee("select.version")},t.title,((o=r.version)==null?void 0:o.alias)&&t.aliases.length>0&&x("span",{class:"md-version__alias"},t.aliases[0])),x("ul",{class:"md-version__list"},e.map(Ra)))}var Ia=0;function ja(e){let t=N([et(e),Ht(e)]).pipe(m(([o,n])=>o||n),K()),r=C(()=>Zo(e)).pipe(ne(ze),pt(1),He(t),m(()=>en(e)));return t.pipe(Ae(o=>o),v(()=>N([t,r])),m(([o,n])=>({active:o,offset:n})),pe())}function Fa(e,t){let{content$:r,viewport$:o}=t,n=`__tooltip2_${Ia++}`;return C(()=>{let i=new g,a=new _r(!1);i.pipe(Z(),ie(!1)).subscribe(a);let s=a.pipe(kt(c=>Le(+!c*250,kr)),K(),v(c=>c?r:S),w(c=>c.id=n),pe());N([i.pipe(m(({active:c})=>c)),s.pipe(v(c=>Ht(c,250)),Q(!1))]).pipe(m(c=>c.some(l=>l))).subscribe(a);let p=a.pipe(b(c=>c),re(s,o),m(([c,l,{size:f}])=>{let 
u=e.getBoundingClientRect(),d=u.width/2;if(l.role==="tooltip")return{x:d,y:8+u.height};if(u.y>=f.height/2){let{height:y}=ce(l);return{x:d,y:-16-y}}else return{x:d,y:16+u.height}}));return N([s,i,p]).subscribe(([c,{offset:l},f])=>{c.style.setProperty("--md-tooltip-host-x",`${l.x}px`),c.style.setProperty("--md-tooltip-host-y",`${l.y}px`),c.style.setProperty("--md-tooltip-x",`${f.x}px`),c.style.setProperty("--md-tooltip-y",`${f.y}px`),c.classList.toggle("md-tooltip2--top",f.y<0),c.classList.toggle("md-tooltip2--bottom",f.y>=0)}),a.pipe(b(c=>c),re(s,(c,l)=>l),b(c=>c.role==="tooltip")).subscribe(c=>{let l=ce(R(":scope > *",c));c.style.setProperty("--md-tooltip-width",`${l.width}px`),c.style.setProperty("--md-tooltip-tail","0px")}),a.pipe(K(),ve(me),re(s)).subscribe(([c,l])=>{l.classList.toggle("md-tooltip2--active",c)}),N([a.pipe(b(c=>c)),s]).subscribe(([c,l])=>{l.role==="dialog"?(e.setAttribute("aria-controls",n),e.setAttribute("aria-haspopup","dialog")):e.setAttribute("aria-describedby",n)}),a.pipe(b(c=>!c)).subscribe(()=>{e.removeAttribute("aria-controls"),e.removeAttribute("aria-describedby"),e.removeAttribute("aria-haspopup")}),ja(e).pipe(w(c=>i.next(c)),_(()=>i.complete()),m(c=>$({ref:e},c)))})}function mt(e,{viewport$:t},r=document.body){return Fa(e,{content$:new j(o=>{let n=e.title,i=wn(n);return o.next(i),e.removeAttribute("title"),r.append(i),()=>{i.remove(),e.setAttribute("title",n)}}),viewport$:t})}function Ua(e,t){let r=C(()=>N([tn(e),ze(t)])).pipe(m(([{x:o,y:n},i])=>{let{width:a,height:s}=ce(e);return{x:o-i.x+a/2,y:n-i.y+s/2}}));return et(e).pipe(v(o=>r.pipe(m(n=>({active:o,offset:n})),Te(+!o||1/0))))}function kn(e,t,{target$:r}){let[o,n]=Array.from(e.children);return C(()=>{let i=new g,a=i.pipe(Z(),ie(!0));return i.subscribe({next({offset:s}){e.style.setProperty("--md-tooltip-x",`${s.x}px`),e.style.setProperty("--md-tooltip-y",`${s.y}px`)},complete(){e.style.removeProperty("--md-tooltip-x"),e.style.removeProperty("--md-tooltip-y")}}),tt(e).pipe(W(a)).subscribe(s=>{e.toggleAttribute("data-md-visible",s)}),O(i.pipe(b(({active:s})=>s)),i.pipe(_e(250),b(({active:s})=>!s))).subscribe({next({active:s}){s?e.prepend(o):o.remove()},complete(){e.prepend(o)}}),i.pipe(Me(16,me)).subscribe(({active:s})=>{o.classList.toggle("md-tooltip--active",s)}),i.pipe(pt(125,me),b(()=>!!e.offsetParent),m(()=>e.offsetParent.getBoundingClientRect()),m(({x:s})=>s)).subscribe({next(s){s?e.style.setProperty("--md-tooltip-0",`${-s}px`):e.style.removeProperty("--md-tooltip-0")},complete(){e.style.removeProperty("--md-tooltip-0")}}),h(n,"click").pipe(W(a),b(s=>!(s.metaKey||s.ctrlKey))).subscribe(s=>{s.stopPropagation(),s.preventDefault()}),h(n,"mousedown").pipe(W(a),re(i)).subscribe(([s,{active:p}])=>{var c;if(s.button!==0||s.metaKey||s.ctrlKey)s.preventDefault();else if(p){s.preventDefault();let l=e.parentElement.closest(".md-annotation");l instanceof HTMLElement?l.focus():(c=Ie())==null||c.blur()}}),r.pipe(W(a),b(s=>s===o),Ge(125)).subscribe(()=>e.focus()),Ua(e,t).pipe(w(s=>i.next(s)),_(()=>i.complete()),m(s=>$({ref:e},s)))})}function Wa(e){return e.tagName==="CODE"?P(".c, .c1, .cm",e):[e]}function Va(e){let t=[];for(let r of Wa(e)){let o=[],n=document.createNodeIterator(r,NodeFilter.SHOW_TEXT);for(let i=n.nextNode();i;i=n.nextNode())o.push(i);for(let i of o){let a;for(;a=/(\(\d+\))(!)?/.exec(i.textContent);){let[,s,p]=a;if(typeof p=="undefined"){let c=i.splitText(a.index);i=c.splitText(s.length),t.push(c)}else{i.textContent=s,t.push(i);break}}}}return t}function 
Hn(e,t){t.append(...Array.from(e.childNodes))}function fr(e,t,{target$:r,print$:o}){let n=t.closest("[id]"),i=n==null?void 0:n.id,a=new Map;for(let s of Va(t)){let[,p]=s.textContent.match(/\((\d+)\)/);fe(`:scope > li:nth-child(${p})`,e)&&(a.set(p,Tn(p,i)),s.replaceWith(a.get(p)))}return a.size===0?S:C(()=>{let s=new g,p=s.pipe(Z(),ie(!0)),c=[];for(let[l,f]of a)c.push([R(".md-typeset",f),R(`:scope > li:nth-child(${l})`,e)]);return o.pipe(W(p)).subscribe(l=>{e.hidden=!l,e.classList.toggle("md-annotation-list",l);for(let[f,u]of c)l?Hn(f,u):Hn(u,f)}),O(...[...a].map(([,l])=>kn(l,t,{target$:r}))).pipe(_(()=>s.complete()),pe())})}function $n(e){if(e.nextElementSibling){let t=e.nextElementSibling;if(t.tagName==="OL")return t;if(t.tagName==="P"&&!t.children.length)return $n(t)}}function Pn(e,t){return C(()=>{let r=$n(e);return typeof r!="undefined"?fr(r,e,t):S})}var Rn=Lt(Br());var Da=0;function In(e){if(e.nextElementSibling){let t=e.nextElementSibling;if(t.tagName==="OL")return t;if(t.tagName==="P"&&!t.children.length)return In(t)}}function za(e){return ge(e).pipe(m(({width:t})=>({scrollable:Tt(e).width>t})),te("scrollable"))}function jn(e,t){let{matches:r}=matchMedia("(hover)"),o=C(()=>{let n=new g,i=n.pipe(jr(1));n.subscribe(({scrollable:c})=>{c&&r?e.setAttribute("tabindex","0"):e.removeAttribute("tabindex")});let a=[];if(Rn.default.isSupported()&&(e.closest(".copy")||B("content.code.copy")&&!e.closest(".no-copy"))){let c=e.closest("pre");c.id=`__code_${Da++}`;let l=Sn(c.id);c.insertBefore(l,e),B("content.tooltips")&&a.push(mt(l,{viewport$}))}let s=e.closest(".highlight");if(s instanceof HTMLElement){let c=In(s);if(typeof c!="undefined"&&(s.classList.contains("annotate")||B("content.code.annotate"))){let l=fr(c,e,t);a.push(ge(s).pipe(W(i),m(({width:f,height:u})=>f&&u),K(),v(f=>f?l:S)))}}return P(":scope > span[id]",e).length&&e.classList.add("md-code__content"),za(e).pipe(w(c=>n.next(c)),_(()=>n.complete()),m(c=>$({ref:e},c)),Re(...a))});return B("content.lazy")?tt(e).pipe(b(n=>n),Te(1),v(()=>o)):o}function Na(e,{target$:t,print$:r}){let o=!0;return O(t.pipe(m(n=>n.closest("details:not([open])")),b(n=>e===n),m(()=>({action:"open",reveal:!0}))),r.pipe(b(n=>n||!o),w(()=>o=e.open),m(n=>({action:n?"open":"close"}))))}function Fn(e,t){return C(()=>{let r=new g;return r.subscribe(({action:o,reveal:n})=>{e.toggleAttribute("open",o==="open"),n&&e.scrollIntoView()}),Na(e,t).pipe(w(o=>r.next(o)),_(()=>r.complete()),m(o=>$({ref:e},o)))})}var Un=".node circle,.node ellipse,.node path,.node polygon,.node rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}marker{fill:var(--md-mermaid-edge-color)!important}.edgeLabel .label rect{fill:#0000}.flowchartTitleText{fill:var(--md-mermaid-label-fg-color)}.label{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.label foreignObject{line-height:normal;overflow:visible}.label div .edgeLabel{color:var(--md-mermaid-label-fg-color)}.edgeLabel,.edgeLabel p,.label div .edgeLabel{background-color:var(--md-mermaid-label-bg-color)}.edgeLabel,.edgeLabel p{fill:var(--md-mermaid-label-bg-color);color:var(--md-mermaid-edge-color)}.edgePath .path,.flowchart-link{stroke:var(--md-mermaid-edge-color)}.edgePath .arrowheadPath{fill:var(--md-mermaid-edge-color);stroke:none}.cluster rect{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}.cluster span{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}g #flowchart-circleEnd,g #flowchart-circleStart,g 
#flowchart-crossEnd,g #flowchart-crossStart,g #flowchart-pointEnd,g #flowchart-pointStart{stroke:none}.classDiagramTitleText{fill:var(--md-mermaid-label-fg-color)}g.classGroup line,g.classGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.classGroup text{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.classLabel .box{fill:var(--md-mermaid-label-bg-color);background-color:var(--md-mermaid-label-bg-color);opacity:1}.classLabel .label{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node .divider{stroke:var(--md-mermaid-node-fg-color)}.relation{stroke:var(--md-mermaid-edge-color)}.cardinality{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.cardinality text{fill:inherit!important}defs marker.marker.composition.class path,defs marker.marker.dependency.class path,defs marker.marker.extension.class path{fill:var(--md-mermaid-edge-color)!important;stroke:var(--md-mermaid-edge-color)!important}defs marker.marker.aggregation.class path{fill:var(--md-mermaid-label-bg-color)!important;stroke:var(--md-mermaid-edge-color)!important}.statediagramTitleText{fill:var(--md-mermaid-label-fg-color)}g.stateGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.stateGroup .state-title{fill:var(--md-mermaid-label-fg-color)!important;font-family:var(--md-mermaid-font-family)}g.stateGroup .composit{fill:var(--md-mermaid-label-bg-color)}.nodeLabel,.nodeLabel p{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}a .nodeLabel{text-decoration:underline}.node circle.state-end,.node circle.state-start,.start-state{fill:var(--md-mermaid-edge-color);stroke:none}.end-state-inner,.end-state-outer{fill:var(--md-mermaid-edge-color)}.end-state-inner,.node circle.state-end{stroke:var(--md-mermaid-label-bg-color)}.transition{stroke:var(--md-mermaid-edge-color)}[id^=state-fork] rect,[id^=state-join] rect{fill:var(--md-mermaid-edge-color)!important;stroke:none!important}.statediagram-cluster.statediagram-cluster .inner{fill:var(--md-default-bg-color)}.statediagram-cluster rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.statediagram-state rect.divider{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}defs #statediagram-barbEnd{stroke:var(--md-mermaid-edge-color)}[id^=entity] path,[id^=entity] rect{fill:var(--md-default-bg-color)}.relationshipLine{stroke:var(--md-mermaid-edge-color)}defs .marker.oneOrMore.er *,defs .marker.onlyOne.er *,defs .marker.zeroOrMore.er *,defs .marker.zeroOrOne.er *{stroke:var(--md-mermaid-edge-color)!important}text:not([class]):last-child{fill:var(--md-mermaid-label-fg-color)}.actor{fill:var(--md-mermaid-sequence-actor-bg-color);stroke:var(--md-mermaid-sequence-actor-border-color)}text.actor>tspan{fill:var(--md-mermaid-sequence-actor-fg-color);font-family:var(--md-mermaid-font-family)}line{stroke:var(--md-mermaid-sequence-actor-line-color)}.actor-man circle,.actor-man 
line{fill:var(--md-mermaid-sequence-actorman-bg-color);stroke:var(--md-mermaid-sequence-actorman-line-color)}.messageLine0,.messageLine1{stroke:var(--md-mermaid-sequence-message-line-color)}.note{fill:var(--md-mermaid-sequence-note-bg-color);stroke:var(--md-mermaid-sequence-note-border-color)}.loopText,.loopText>tspan,.messageText,.noteText>tspan{stroke:none;font-family:var(--md-mermaid-font-family)!important}.messageText{fill:var(--md-mermaid-sequence-message-fg-color)}.loopText,.loopText>tspan{fill:var(--md-mermaid-sequence-loop-fg-color)}.noteText>tspan{fill:var(--md-mermaid-sequence-note-fg-color)}#arrowhead path{fill:var(--md-mermaid-sequence-message-line-color);stroke:none}.loopLine{fill:var(--md-mermaid-sequence-loop-bg-color);stroke:var(--md-mermaid-sequence-loop-border-color)}.labelBox{fill:var(--md-mermaid-sequence-label-bg-color);stroke:none}.labelText,.labelText>span{fill:var(--md-mermaid-sequence-label-fg-color);font-family:var(--md-mermaid-font-family)}.sequenceNumber{fill:var(--md-mermaid-sequence-number-fg-color)}rect.rect{fill:var(--md-mermaid-sequence-box-bg-color);stroke:none}rect.rect+text.text{fill:var(--md-mermaid-sequence-box-fg-color)}defs #sequencenumber{fill:var(--md-mermaid-sequence-number-bg-color)!important}";var Gr,Qa=0;function Ka(){return typeof mermaid=="undefined"||mermaid instanceof Element?wt("https://unpkg.com/mermaid@11/dist/mermaid.min.js"):I(void 0)}function Wn(e){return e.classList.remove("mermaid"),Gr||(Gr=Ka().pipe(w(()=>mermaid.initialize({startOnLoad:!1,themeCSS:Un,sequence:{actorFontSize:"16px",messageFontSize:"16px",noteFontSize:"16px"}})),m(()=>{}),G(1))),Gr.subscribe(()=>co(null,null,function*(){e.classList.add("mermaid");let t=`__mermaid_${Qa++}`,r=x("div",{class:"mermaid"}),o=e.textContent,{svg:n,fn:i}=yield mermaid.render(t,o),a=r.attachShadow({mode:"closed"});a.innerHTML=n,e.replaceWith(r),i==null||i(a)})),Gr.pipe(m(()=>({ref:e})))}var Vn=x("table");function Dn(e){return e.replaceWith(Vn),Vn.replaceWith(An(e)),I({ref:e})}function Ya(e){let t=e.find(r=>r.checked)||e[0];return O(...e.map(r=>h(r,"change").pipe(m(()=>R(`label[for="${r.id}"]`))))).pipe(Q(R(`label[for="${t.id}"]`)),m(r=>({active:r})))}function zn(e,{viewport$:t,target$:r}){let o=R(".tabbed-labels",e),n=P(":scope > input",e),i=Kr("prev");e.append(i);let a=Kr("next");return e.append(a),C(()=>{let s=new g,p=s.pipe(Z(),ie(!0));N([s,ge(e),tt(e)]).pipe(W(p),Me(1,me)).subscribe({next([{active:c},l]){let f=De(c),{width:u}=ce(c);e.style.setProperty("--md-indicator-x",`${f.x}px`),e.style.setProperty("--md-indicator-width",`${u}px`);let d=pr(o);(f.xd.x+l.width)&&o.scrollTo({left:Math.max(0,f.x-16),behavior:"smooth"})},complete(){e.style.removeProperty("--md-indicator-x"),e.style.removeProperty("--md-indicator-width")}}),N([ze(o),ge(o)]).pipe(W(p)).subscribe(([c,l])=>{let f=Tt(o);i.hidden=c.x<16,a.hidden=c.x>f.width-l.width-16}),O(h(i,"click").pipe(m(()=>-1)),h(a,"click").pipe(m(()=>1))).pipe(W(p)).subscribe(c=>{let{width:l}=ce(o);o.scrollBy({left:l*c,behavior:"smooth"})}),r.pipe(W(p),b(c=>n.includes(c))).subscribe(c=>c.click()),o.classList.add("tabbed-labels--linked");for(let c of n){let l=R(`label[for="${c.id}"]`);l.replaceChildren(x("a",{href:`#${l.htmlFor}`,tabIndex:-1},...Array.from(l.childNodes))),h(l.firstElementChild,"click").pipe(W(p),b(f=>!(f.metaKey||f.ctrlKey)),w(f=>{f.preventDefault(),f.stopPropagation()})).subscribe(()=>{history.replaceState({},"",`#${l.htmlFor}`),l.click()})}return B("content.tabs.link")&&s.pipe(Ce(1),re(t)).subscribe(([{active:c},{offset:l}])=>{let 
f=c.innerText.trim();if(c.hasAttribute("data-md-switching"))c.removeAttribute("data-md-switching");else{let u=e.offsetTop-l.y;for(let y of P("[data-tabs]"))for(let L of P(":scope > input",y)){let X=R(`label[for="${L.id}"]`);if(X!==c&&X.innerText.trim()===f){X.setAttribute("data-md-switching",""),L.click();break}}window.scrollTo({top:e.offsetTop-u});let d=__md_get("__tabs")||[];__md_set("__tabs",[...new Set([f,...d])])}}),s.pipe(W(p)).subscribe(()=>{for(let c of P("audio, video",e))c.offsetWidth&&c.autoplay?c.play().catch(()=>{}):c.pause()}),Ya(n).pipe(w(c=>s.next(c)),_(()=>s.complete()),m(c=>$({ref:e},c)))}).pipe(Ke(se))}function Nn(e,{viewport$:t,target$:r,print$:o}){return O(...P(".annotate:not(.highlight)",e).map(n=>Pn(n,{target$:r,print$:o})),...P("pre:not(.mermaid) > code",e).map(n=>jn(n,{target$:r,print$:o})),...P("pre.mermaid",e).map(n=>Wn(n)),...P("table:not([class])",e).map(n=>Dn(n)),...P("details",e).map(n=>Fn(n,{target$:r,print$:o})),...P("[data-tabs]",e).map(n=>zn(n,{viewport$:t,target$:r})),...P("[title]",e).filter(()=>B("content.tooltips")).map(n=>mt(n,{viewport$:t})))}function Ba(e,{alert$:t}){return t.pipe(v(r=>O(I(!0),I(!1).pipe(Ge(2e3))).pipe(m(o=>({message:r,active:o})))))}function qn(e,t){let r=R(".md-typeset",e);return C(()=>{let o=new g;return o.subscribe(({message:n,active:i})=>{e.classList.toggle("md-dialog--active",i),r.textContent=n}),Ba(e,t).pipe(w(n=>o.next(n)),_(()=>o.complete()),m(n=>$({ref:e},n)))})}var Ga=0;function Ja(e,t){document.body.append(e);let{width:r}=ce(e);e.style.setProperty("--md-tooltip-width",`${r}px`),e.remove();let o=cr(t),n=typeof o!="undefined"?ze(o):I({x:0,y:0}),i=O(et(t),Ht(t)).pipe(K());return N([i,n]).pipe(m(([a,s])=>{let{x:p,y:c}=De(t),l=ce(t),f=t.closest("table");return f&&t.parentElement&&(p+=f.offsetLeft+t.parentElement.offsetLeft,c+=f.offsetTop+t.parentElement.offsetTop),{active:a,offset:{x:p-s.x+l.width/2-r/2,y:c-s.y+l.height+8}}}))}function Qn(e){let t=e.title;if(!t.length)return S;let r=`__tooltip_${Ga++}`,o=Pt(r,"inline"),n=R(".md-typeset",o);return n.innerHTML=t,C(()=>{let i=new g;return i.subscribe({next({offset:a}){o.style.setProperty("--md-tooltip-x",`${a.x}px`),o.style.setProperty("--md-tooltip-y",`${a.y}px`)},complete(){o.style.removeProperty("--md-tooltip-x"),o.style.removeProperty("--md-tooltip-y")}}),O(i.pipe(b(({active:a})=>a)),i.pipe(_e(250),b(({active:a})=>!a))).subscribe({next({active:a}){a?(e.insertAdjacentElement("afterend",o),e.setAttribute("aria-describedby",r),e.removeAttribute("title")):(o.remove(),e.removeAttribute("aria-describedby"),e.setAttribute("title",t))},complete(){o.remove(),e.removeAttribute("aria-describedby"),e.setAttribute("title",t)}}),i.pipe(Me(16,me)).subscribe(({active:a})=>{o.classList.toggle("md-tooltip--active",a)}),i.pipe(pt(125,me),b(()=>!!e.offsetParent),m(()=>e.offsetParent.getBoundingClientRect()),m(({x:a})=>a)).subscribe({next(a){a?o.style.setProperty("--md-tooltip-0",`${-a}px`):o.style.removeProperty("--md-tooltip-0")},complete(){o.style.removeProperty("--md-tooltip-0")}}),Ja(o,e).pipe(w(a=>i.next(a)),_(()=>i.complete()),m(a=>$({ref:e},a)))}).pipe(Ke(se))}function Xa({viewport$:e}){if(!B("header.autohide"))return I(!1);let t=e.pipe(m(({offset:{y:n}})=>n),Be(2,1),m(([n,i])=>[nMath.abs(i-n.y)>100),m(([,[n]])=>n),K()),o=Ne("search");return N([e,o]).pipe(m(([{offset:n},i])=>n.y>400&&!i),K(),v(n=>n?r:I(!1)),Q(!1))}function Kn(e,t){return C(()=>N([ge(e),Xa(t)])).pipe(m(([{height:r},o])=>({height:r,hidden:o})),K((r,o)=>r.height===o.height&&r.hidden===o.hidden),G(1))}function 
Yn(e,{header$:t,main$:r}){return C(()=>{let o=new g,n=o.pipe(Z(),ie(!0));o.pipe(te("active"),He(t)).subscribe(([{active:a},{hidden:s}])=>{e.classList.toggle("md-header--shadow",a&&!s),e.hidden=s});let i=ue(P("[title]",e)).pipe(b(()=>B("content.tooltips")),ne(a=>Qn(a)));return r.subscribe(o),t.pipe(W(n),m(a=>$({ref:e},a)),Re(i.pipe(W(n))))})}function Za(e,{viewport$:t,header$:r}){return mr(e,{viewport$:t,header$:r}).pipe(m(({offset:{y:o}})=>{let{height:n}=ce(e);return{active:n>0&&o>=n}}),te("active"))}function Bn(e,t){return C(()=>{let r=new g;r.subscribe({next({active:n}){e.classList.toggle("md-header__title--active",n)},complete(){e.classList.remove("md-header__title--active")}});let o=fe(".md-content h1");return typeof o=="undefined"?S:Za(o,t).pipe(w(n=>r.next(n)),_(()=>r.complete()),m(n=>$({ref:e},n)))})}function Gn(e,{viewport$:t,header$:r}){let o=r.pipe(m(({height:i})=>i),K()),n=o.pipe(v(()=>ge(e).pipe(m(({height:i})=>({top:e.offsetTop,bottom:e.offsetTop+i})),te("bottom"))));return N([o,n,t]).pipe(m(([i,{top:a,bottom:s},{offset:{y:p},size:{height:c}}])=>(c=Math.max(0,c-Math.max(0,a-p,i)-Math.max(0,c+p-s)),{offset:a-i,height:c,active:a-i<=p})),K((i,a)=>i.offset===a.offset&&i.height===a.height&&i.active===a.active))}function es(e){let t=__md_get("__palette")||{index:e.findIndex(o=>matchMedia(o.getAttribute("data-md-color-media")).matches)},r=Math.max(0,Math.min(t.index,e.length-1));return I(...e).pipe(ne(o=>h(o,"change").pipe(m(()=>o))),Q(e[r]),m(o=>({index:e.indexOf(o),color:{media:o.getAttribute("data-md-color-media"),scheme:o.getAttribute("data-md-color-scheme"),primary:o.getAttribute("data-md-color-primary"),accent:o.getAttribute("data-md-color-accent")}})),G(1))}function Jn(e){let t=P("input",e),r=x("meta",{name:"theme-color"});document.head.appendChild(r);let o=x("meta",{name:"color-scheme"});document.head.appendChild(o);let n=$t("(prefers-color-scheme: light)");return C(()=>{let i=new g;return i.subscribe(a=>{if(document.body.setAttribute("data-md-color-switching",""),a.color.media==="(prefers-color-scheme)"){let s=matchMedia("(prefers-color-scheme: light)"),p=document.querySelector(s.matches?"[data-md-color-media='(prefers-color-scheme: light)']":"[data-md-color-media='(prefers-color-scheme: dark)']");a.color.scheme=p.getAttribute("data-md-color-scheme"),a.color.primary=p.getAttribute("data-md-color-primary"),a.color.accent=p.getAttribute("data-md-color-accent")}for(let[s,p]of Object.entries(a.color))document.body.setAttribute(`data-md-color-${s}`,p);for(let s=0;sa.key==="Enter"),re(i,(a,s)=>s)).subscribe(({index:a})=>{a=(a+1)%t.length,t[a].click(),t[a].focus()}),i.pipe(m(()=>{let a=Se("header"),s=window.getComputedStyle(a);return o.content=s.colorScheme,s.backgroundColor.match(/\d+/g).map(p=>(+p).toString(16).padStart(2,"0")).join("")})).subscribe(a=>r.content=`#${a}`),i.pipe(ve(se)).subscribe(()=>{document.body.removeAttribute("data-md-color-switching")}),es(t).pipe(W(n.pipe(Ce(1))),ct(),w(a=>i.next(a)),_(()=>i.complete()),m(a=>$({ref:e},a)))})}function Xn(e,{progress$:t}){return C(()=>{let r=new g;return r.subscribe(({value:o})=>{e.style.setProperty("--md-progress-value",`${o}`)}),t.pipe(w(o=>r.next({value:o})),_(()=>r.complete()),m(o=>({ref:e,value:o})))})}var Jr=Lt(Br());function ts(e){e.setAttribute("data-md-copying","");let t=e.closest("[data-copy]"),r=t?t.getAttribute("data-copy"):e.innerText;return e.removeAttribute("data-md-copying"),r.trimEnd()}function Zn({alert$:e}){Jr.default.isSupported()&&new j(t=>{new Jr.default("[data-clipboard-target], 
[data-clipboard-text]",{text:r=>r.getAttribute("data-clipboard-text")||ts(R(r.getAttribute("data-clipboard-target")))}).on("success",r=>t.next(r))}).pipe(w(t=>{t.trigger.focus()}),m(()=>Ee("clipboard.copied"))).subscribe(e)}function ei(e,t){return e.protocol=t.protocol,e.hostname=t.hostname,e}function rs(e,t){let r=new Map;for(let o of P("url",e)){let n=R("loc",o),i=[ei(new URL(n.textContent),t)];r.set(`${i[0]}`,i);for(let a of P("[rel=alternate]",o)){let s=a.getAttribute("href");s!=null&&i.push(ei(new URL(s),t))}}return r}function ur(e){return un(new URL("sitemap.xml",e)).pipe(m(t=>rs(t,new URL(e))),de(()=>I(new Map)))}function os(e,t){if(!(e.target instanceof Element))return S;let r=e.target.closest("a");if(r===null)return S;if(r.target||e.metaKey||e.ctrlKey)return S;let o=new URL(r.href);return o.search=o.hash="",t.has(`${o}`)?(e.preventDefault(),I(new URL(r.href))):S}function ti(e){let t=new Map;for(let r of P(":scope > *",e.head))t.set(r.outerHTML,r);return t}function ri(e){for(let t of P("[href], [src]",e))for(let r of["href","src"]){let o=t.getAttribute(r);if(o&&!/^(?:[a-z]+:)?\/\//i.test(o)){t[r]=t[r];break}}return I(e)}function ns(e){for(let o of["[data-md-component=announce]","[data-md-component=container]","[data-md-component=header-topic]","[data-md-component=outdated]","[data-md-component=logo]","[data-md-component=skip]",...B("navigation.tabs.sticky")?["[data-md-component=tabs]"]:[]]){let n=fe(o),i=fe(o,e);typeof n!="undefined"&&typeof i!="undefined"&&n.replaceWith(i)}let t=ti(document);for(let[o,n]of ti(e))t.has(o)?t.delete(o):document.head.appendChild(n);for(let o of t.values()){let n=o.getAttribute("name");n!=="theme-color"&&n!=="color-scheme"&&o.remove()}let r=Se("container");return We(P("script",r)).pipe(v(o=>{let n=e.createElement("script");if(o.src){for(let i of o.getAttributeNames())n.setAttribute(i,o.getAttribute(i));return o.replaceWith(n),new j(i=>{n.onload=()=>i.complete()})}else return n.textContent=o.textContent,o.replaceWith(n),S}),Z(),ie(document))}function oi({location$:e,viewport$:t,progress$:r}){let o=xe();if(location.protocol==="file:")return S;let n=ur(o.base);I(document).subscribe(ri);let i=h(document.body,"click").pipe(He(n),v(([p,c])=>os(p,c)),pe()),a=h(window,"popstate").pipe(m(ye),pe());i.pipe(re(t)).subscribe(([p,{offset:c}])=>{history.replaceState(c,""),history.pushState(null,"",p)}),O(i,a).subscribe(e);let s=e.pipe(te("pathname"),v(p=>fn(p,{progress$:r}).pipe(de(()=>(lt(p,!0),S)))),v(ri),v(ns),pe());return O(s.pipe(re(e,(p,c)=>c)),s.pipe(v(()=>e),te("hash")),e.pipe(K((p,c)=>p.pathname===c.pathname&&p.hash===c.hash),v(()=>i),w(()=>history.back()))).subscribe(p=>{var c,l;history.state!==null||!p.hash?window.scrollTo(0,(l=(c=history.state)==null?void 0:c.y)!=null?l:0):(history.scrollRestoration="auto",pn(p.hash),history.scrollRestoration="manual")}),e.subscribe(()=>{history.scrollRestoration="manual"}),h(window,"beforeunload").subscribe(()=>{history.scrollRestoration="auto"}),t.pipe(te("offset"),_e(100)).subscribe(({offset:p})=>{history.replaceState(p,"")}),s}var ni=Lt(qr());function ii(e){let t=e.separator.split("|").map(n=>n.replace(/(\(\?[!=<][^)]+\))/g,"").length===0?"\uFFFD":n).join("|"),r=new RegExp(t,"img"),o=(n,i,a)=>`${i}${a}`;return n=>{n=n.replace(/[\s*+\-:~^]+/g," ").replace(/&/g,"&").trim();let i=new RegExp(`(^|${e.separator}|)(${n.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(r,"|")})`,"img");return a=>(0,ni.default)(a).replace(i,o).replace(/<\/mark>(\s+)]*>/img,"$1")}}function It(e){return e.type===1}function dr(e){return 
e.type===3}function ai(e,t){let r=yn(e);return O(I(location.protocol!=="file:"),Ne("search")).pipe(Ae(o=>o),v(()=>t)).subscribe(({config:o,docs:n})=>r.next({type:0,data:{config:o,docs:n,options:{suggest:B("search.suggest")}}})),r}function si(e){var l;let{selectedVersionSitemap:t,selectedVersionBaseURL:r,currentLocation:o,currentBaseURL:n}=e,i=(l=Xr(n))==null?void 0:l.pathname;if(i===void 0)return;let a=ss(o.pathname,i);if(a===void 0)return;let s=ps(t.keys());if(!t.has(s))return;let p=Xr(a,s);if(!p||!t.has(p.href))return;let c=Xr(a,r);if(c)return c.hash=o.hash,c.search=o.search,c}function Xr(e,t){try{return new URL(e,t)}catch(r){return}}function ss(e,t){if(e.startsWith(t))return e.slice(t.length)}function cs(e,t){let r=Math.min(e.length,t.length),o;for(o=0;oS)),o=r.pipe(m(n=>{let[,i]=t.base.match(/([^/]+)\/?$/);return n.find(({version:a,aliases:s})=>a===i||s.includes(i))||n[0]}));r.pipe(m(n=>new Map(n.map(i=>[`${new URL(`../${i.version}/`,t.base)}`,i]))),v(n=>h(document.body,"click").pipe(b(i=>!i.metaKey&&!i.ctrlKey),re(o),v(([i,a])=>{if(i.target instanceof Element){let s=i.target.closest("a");if(s&&!s.target&&n.has(s.href)){let p=s.href;return!i.target.closest(".md-version")&&n.get(p)===a?S:(i.preventDefault(),I(new URL(p)))}}return S}),v(i=>ur(i).pipe(m(a=>{var s;return(s=si({selectedVersionSitemap:a,selectedVersionBaseURL:i,currentLocation:ye(),currentBaseURL:t.base}))!=null?s:i})))))).subscribe(n=>lt(n,!0)),N([r,o]).subscribe(([n,i])=>{R(".md-header__topic").appendChild(Cn(n,i))}),e.pipe(v(()=>o)).subscribe(n=>{var s;let i=new URL(t.base),a=__md_get("__outdated",sessionStorage,i);if(a===null){a=!0;let p=((s=t.version)==null?void 0:s.default)||"latest";Array.isArray(p)||(p=[p]);e:for(let c of p)for(let l of n.aliases.concat(n.version))if(new RegExp(c,"i").test(l)){a=!1;break e}__md_set("__outdated",a,sessionStorage,i)}if(a)for(let p of ae("outdated"))p.hidden=!1})}function ls(e,{worker$:t}){let{searchParams:r}=ye();r.has("q")&&(Je("search",!0),e.value=r.get("q"),e.focus(),Ne("search").pipe(Ae(i=>!i)).subscribe(()=>{let i=ye();i.searchParams.delete("q"),history.replaceState({},"",`${i}`)}));let o=et(e),n=O(t.pipe(Ae(It)),h(e,"keyup"),o).pipe(m(()=>e.value),K());return N([n,o]).pipe(m(([i,a])=>({value:i,focus:a})),G(1))}function pi(e,{worker$:t}){let r=new g,o=r.pipe(Z(),ie(!0));N([t.pipe(Ae(It)),r],(i,a)=>a).pipe(te("value")).subscribe(({value:i})=>t.next({type:2,data:i})),r.pipe(te("focus")).subscribe(({focus:i})=>{i&&Je("search",i)}),h(e.form,"reset").pipe(W(o)).subscribe(()=>e.focus());let n=R("header [for=__search]");return h(n,"click").subscribe(()=>e.focus()),ls(e,{worker$:t}).pipe(w(i=>r.next(i)),_(()=>r.complete()),m(i=>$({ref:e},i)),G(1))}function li(e,{worker$:t,query$:r}){let o=new g,n=on(e.parentElement).pipe(b(Boolean)),i=e.parentElement,a=R(":scope > :first-child",e),s=R(":scope > :last-child",e);Ne("search").subscribe(l=>{s.setAttribute("role",l?"list":"presentation"),s.hidden=!l}),o.pipe(re(r),Wr(t.pipe(Ae(It)))).subscribe(([{items:l},{value:f}])=>{switch(l.length){case 0:a.textContent=f.length?Ee("search.result.none"):Ee("search.result.placeholder");break;case 1:a.textContent=Ee("search.result.one");break;default:let u=sr(l.length);a.textContent=Ee("search.result.other",u)}});let p=o.pipe(w(()=>s.innerHTML=""),v(({items:l})=>O(I(...l.slice(0,10)),I(...l.slice(10)).pipe(Be(4),Dr(n),v(([f])=>f)))),m(Mn),pe());return p.subscribe(l=>s.appendChild(l)),p.pipe(ne(l=>{let f=fe("details",l);return typeof 
f=="undefined"?S:h(f,"toggle").pipe(W(o),m(()=>f))})).subscribe(l=>{l.open===!1&&l.offsetTop<=i.scrollTop&&i.scrollTo({top:l.offsetTop})}),t.pipe(b(dr),m(({data:l})=>l)).pipe(w(l=>o.next(l)),_(()=>o.complete()),m(l=>$({ref:e},l)))}function ms(e,{query$:t}){return t.pipe(m(({value:r})=>{let o=ye();return o.hash="",r=r.replace(/\s+/g,"+").replace(/&/g,"%26").replace(/=/g,"%3D"),o.search=`q=${r}`,{url:o}}))}function mi(e,t){let r=new g,o=r.pipe(Z(),ie(!0));return r.subscribe(({url:n})=>{e.setAttribute("data-clipboard-text",e.href),e.href=`${n}`}),h(e,"click").pipe(W(o)).subscribe(n=>n.preventDefault()),ms(e,t).pipe(w(n=>r.next(n)),_(()=>r.complete()),m(n=>$({ref:e},n)))}function fi(e,{worker$:t,keyboard$:r}){let o=new g,n=Se("search-query"),i=O(h(n,"keydown"),h(n,"focus")).pipe(ve(se),m(()=>n.value),K());return o.pipe(He(i),m(([{suggest:s},p])=>{let c=p.split(/([\s-]+)/);if(s!=null&&s.length&&c[c.length-1]){let l=s[s.length-1];l.startsWith(c[c.length-1])&&(c[c.length-1]=l)}else c.length=0;return c})).subscribe(s=>e.innerHTML=s.join("").replace(/\s/g," ")),r.pipe(b(({mode:s})=>s==="search")).subscribe(s=>{switch(s.type){case"ArrowRight":e.innerText.length&&n.selectionStart===n.value.length&&(n.value=e.innerText);break}}),t.pipe(b(dr),m(({data:s})=>s)).pipe(w(s=>o.next(s)),_(()=>o.complete()),m(()=>({ref:e})))}function ui(e,{index$:t,keyboard$:r}){let o=xe();try{let n=ai(o.search,t),i=Se("search-query",e),a=Se("search-result",e);h(e,"click").pipe(b(({target:p})=>p instanceof Element&&!!p.closest("a"))).subscribe(()=>Je("search",!1)),r.pipe(b(({mode:p})=>p==="search")).subscribe(p=>{let c=Ie();switch(p.type){case"Enter":if(c===i){let l=new Map;for(let f of P(":first-child [href]",a)){let u=f.firstElementChild;l.set(f,parseFloat(u.getAttribute("data-md-score")))}if(l.size){let[[f]]=[...l].sort(([,u],[,d])=>d-u);f.click()}p.claim()}break;case"Escape":case"Tab":Je("search",!1),i.blur();break;case"ArrowUp":case"ArrowDown":if(typeof c=="undefined")i.focus();else{let l=[i,...P(":not(details) > [href], summary, details[open] [href]",a)],f=Math.max(0,(Math.max(0,l.indexOf(c))+l.length+(p.type==="ArrowUp"?-1:1))%l.length);l[f].focus()}p.claim();break;default:i!==Ie()&&i.focus()}}),r.pipe(b(({mode:p})=>p==="global")).subscribe(p=>{switch(p.type){case"f":case"s":case"/":i.focus(),i.select(),p.claim();break}});let s=pi(i,{worker$:n});return O(s,li(a,{worker$:n,query$:s})).pipe(Re(...ae("search-share",e).map(p=>mi(p,{query$:s})),...ae("search-suggest",e).map(p=>fi(p,{worker$:n,keyboard$:r}))))}catch(n){return e.hidden=!0,Ye}}function di(e,{index$:t,location$:r}){return N([t,r.pipe(Q(ye()),b(o=>!!o.searchParams.get("h")))]).pipe(m(([o,n])=>ii(o.config)(n.searchParams.get("h"))),m(o=>{var a;let n=new Map,i=document.createNodeIterator(e,NodeFilter.SHOW_TEXT);for(let s=i.nextNode();s;s=i.nextNode())if((a=s.parentElement)!=null&&a.offsetHeight){let p=s.textContent,c=o(p);c.length>p.length&&n.set(s,c)}for(let[s,p]of n){let{childNodes:c}=x("span",null,p);s.replaceWith(...Array.from(c))}return{ref:e,nodes:n}}))}function fs(e,{viewport$:t,main$:r}){let o=e.closest(".md-grid"),n=o.offsetTop-o.parentElement.offsetTop;return N([r,t]).pipe(m(([{offset:i,height:a},{offset:{y:s}}])=>(a=a+Math.min(n,Math.max(0,s-i))-n,{height:a,locked:s>=i+n})),K((i,a)=>i.height===a.height&&i.locked===a.locked))}function Zr(e,o){var n=o,{header$:t}=n,r=so(n,["header$"]);let i=R(".md-sidebar__scrollwrap",e),{y:a}=De(i);return C(()=>{let s=new g,p=s.pipe(Z(),ie(!0)),c=s.pipe(Me(0,me));return 
c.pipe(re(t)).subscribe({next([{height:l},{height:f}]){i.style.height=`${l-2*a}px`,e.style.top=`${f}px`},complete(){i.style.height="",e.style.top=""}}),c.pipe(Ae()).subscribe(()=>{for(let l of P(".md-nav__link--active[href]",e)){if(!l.clientHeight)continue;let f=l.closest(".md-sidebar__scrollwrap");if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=ce(f);f.scrollTo({top:u-d/2})}}}),ue(P("label[tabindex]",e)).pipe(ne(l=>h(l,"click").pipe(ve(se),m(()=>l),W(p)))).subscribe(l=>{let f=R(`[id="${l.htmlFor}"]`);R(`[aria-labelledby="${l.id}"]`).setAttribute("aria-expanded",`${f.checked}`)}),fs(e,r).pipe(w(l=>s.next(l)),_(()=>s.complete()),m(l=>$({ref:e},l)))})}function hi(e,t){if(typeof t!="undefined"){let r=`https://api.github.com/repos/${e}/${t}`;return st(je(`${r}/releases/latest`).pipe(de(()=>S),m(o=>({version:o.tag_name})),Ve({})),je(r).pipe(de(()=>S),m(o=>({stars:o.stargazers_count,forks:o.forks_count})),Ve({}))).pipe(m(([o,n])=>$($({},o),n)))}else{let r=`https://api.github.com/users/${e}`;return je(r).pipe(m(o=>({repositories:o.public_repos})),Ve({}))}}function bi(e,t){let r=`https://${e}/api/v4/projects/${encodeURIComponent(t)}`;return st(je(`${r}/releases/permalink/latest`).pipe(de(()=>S),m(({tag_name:o})=>({version:o})),Ve({})),je(r).pipe(de(()=>S),m(({star_count:o,forks_count:n})=>({stars:o,forks:n})),Ve({}))).pipe(m(([o,n])=>$($({},o),n)))}function vi(e){let t=e.match(/^.+github\.com\/([^/]+)\/?([^/]+)?/i);if(t){let[,r,o]=t;return hi(r,o)}if(t=e.match(/^.+?([^/]*gitlab[^/]+)\/(.+?)\/?$/i),t){let[,r,o]=t;return bi(r,o)}return S}var us;function ds(e){return us||(us=C(()=>{let t=__md_get("__source",sessionStorage);if(t)return I(t);if(ae("consent").length){let o=__md_get("__consent");if(!(o&&o.github))return S}return vi(e.href).pipe(w(o=>__md_set("__source",o,sessionStorage)))}).pipe(de(()=>S),b(t=>Object.keys(t).length>0),m(t=>({facts:t})),G(1)))}function gi(e){let t=R(":scope > :last-child",e);return C(()=>{let r=new g;return r.subscribe(({facts:o})=>{t.appendChild(_n(o)),t.classList.add("md-source__repository--active")}),ds(e).pipe(w(o=>r.next(o)),_(()=>r.complete()),m(o=>$({ref:e},o)))})}function hs(e,{viewport$:t,header$:r}){return ge(document.body).pipe(v(()=>mr(e,{header$:r,viewport$:t})),m(({offset:{y:o}})=>({hidden:o>=10})),te("hidden"))}function yi(e,t){return C(()=>{let r=new g;return r.subscribe({next({hidden:o}){e.hidden=o},complete(){e.hidden=!1}}),(B("navigation.tabs.sticky")?I({hidden:!1}):hs(e,t)).pipe(w(o=>r.next(o)),_(()=>r.complete()),m(o=>$({ref:e},o)))})}function bs(e,{viewport$:t,header$:r}){let o=new Map,n=P(".md-nav__link",e);for(let s of n){let p=decodeURIComponent(s.hash.substring(1)),c=fe(`[id="${p}"]`);typeof c!="undefined"&&o.set(s,c)}let i=r.pipe(te("height"),m(({height:s})=>{let p=Se("main"),c=R(":scope > :first-child",p);return s+.8*(c.offsetTop-p.offsetTop)}),pe());return ge(document.body).pipe(te("height"),v(s=>C(()=>{let p=[];return I([...o].reduce((c,[l,f])=>{for(;p.length&&o.get(p[p.length-1]).tagName>=f.tagName;)p.pop();let u=f.offsetTop;for(;!u&&f.parentElement;)f=f.parentElement,u=f.offsetTop;let d=f.offsetParent;for(;d;d=d.offsetParent)u+=d.offsetTop;return c.set([...p=[...p,l]].reverse(),u)},new Map))}).pipe(m(p=>new Map([...p].sort(([,c],[,l])=>c-l))),He(i),v(([p,c])=>t.pipe(Fr(([l,f],{offset:{y:u},size:d})=>{let y=u+d.height>=Math.floor(s.height);for(;f.length;){let[,L]=f[0];if(L-c=u&&!y)f=[l.pop(),...f];else 
break}return[l,f]},[[],[...p]]),K((l,f)=>l[0]===f[0]&&l[1]===f[1])))))).pipe(m(([s,p])=>({prev:s.map(([c])=>c),next:p.map(([c])=>c)})),Q({prev:[],next:[]}),Be(2,1),m(([s,p])=>s.prev.length{let i=new g,a=i.pipe(Z(),ie(!0));if(i.subscribe(({prev:s,next:p})=>{for(let[c]of p)c.classList.remove("md-nav__link--passed"),c.classList.remove("md-nav__link--active");for(let[c,[l]]of s.entries())l.classList.add("md-nav__link--passed"),l.classList.toggle("md-nav__link--active",c===s.length-1)}),B("toc.follow")){let s=O(t.pipe(_e(1),m(()=>{})),t.pipe(_e(250),m(()=>"smooth")));i.pipe(b(({prev:p})=>p.length>0),He(o.pipe(ve(se))),re(s)).subscribe(([[{prev:p}],c])=>{let[l]=p[p.length-1];if(l.offsetHeight){let f=cr(l);if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=ce(f);f.scrollTo({top:u-d/2,behavior:c})}}})}return B("navigation.tracking")&&t.pipe(W(a),te("offset"),_e(250),Ce(1),W(n.pipe(Ce(1))),ct({delay:250}),re(i)).subscribe(([,{prev:s}])=>{let p=ye(),c=s[s.length-1];if(c&&c.length){let[l]=c,{hash:f}=new URL(l.href);p.hash!==f&&(p.hash=f,history.replaceState({},"",`${p}`))}else p.hash="",history.replaceState({},"",`${p}`)}),bs(e,{viewport$:t,header$:r}).pipe(w(s=>i.next(s)),_(()=>i.complete()),m(s=>$({ref:e},s)))})}function vs(e,{viewport$:t,main$:r,target$:o}){let n=t.pipe(m(({offset:{y:a}})=>a),Be(2,1),m(([a,s])=>a>s&&s>0),K()),i=r.pipe(m(({active:a})=>a));return N([i,n]).pipe(m(([a,s])=>!(a&&s)),K(),W(o.pipe(Ce(1))),ie(!0),ct({delay:250}),m(a=>({hidden:a})))}function Ei(e,{viewport$:t,header$:r,main$:o,target$:n}){let i=new g,a=i.pipe(Z(),ie(!0));return i.subscribe({next({hidden:s}){e.hidden=s,s?(e.setAttribute("tabindex","-1"),e.blur()):e.removeAttribute("tabindex")},complete(){e.style.top="",e.hidden=!0,e.removeAttribute("tabindex")}}),r.pipe(W(a),te("height")).subscribe(({height:s})=>{e.style.top=`${s+16}px`}),h(e,"click").subscribe(s=>{s.preventDefault(),window.scrollTo({top:0})}),vs(e,{viewport$:t,main$:o,target$:n}).pipe(w(s=>i.next(s)),_(()=>i.complete()),m(s=>$({ref:e},s)))}function wi({document$:e,viewport$:t}){e.pipe(v(()=>P(".md-ellipsis")),ne(r=>tt(r).pipe(W(e.pipe(Ce(1))),b(o=>o),m(()=>r),Te(1))),b(r=>r.offsetWidth{let o=r.innerText,n=r.closest("a")||r;return n.title=o,B("content.tooltips")?mt(n,{viewport$:t}).pipe(W(e.pipe(Ce(1))),_(()=>n.removeAttribute("title"))):S})).subscribe(),B("content.tooltips")&&e.pipe(v(()=>P(".md-status")),ne(r=>mt(r,{viewport$:t}))).subscribe()}function Ti({document$:e,tablet$:t}){e.pipe(v(()=>P(".md-toggle--indeterminate")),w(r=>{r.indeterminate=!0,r.checked=!1}),ne(r=>h(r,"change").pipe(Vr(()=>r.classList.contains("md-toggle--indeterminate")),m(()=>r))),re(t)).subscribe(([r,o])=>{r.classList.remove("md-toggle--indeterminate"),o&&(r.checked=!1)})}function gs(){return/(iPad|iPhone|iPod)/.test(navigator.userAgent)}function Si({document$:e}){e.pipe(v(()=>P("[data-md-scrollfix]")),w(t=>t.removeAttribute("data-md-scrollfix")),b(gs),ne(t=>h(t,"touchstart").pipe(m(()=>t)))).subscribe(t=>{let r=t.scrollTop;r===0?t.scrollTop=1:r+t.offsetHeight===t.scrollHeight&&(t.scrollTop=r-1)})}function Oi({viewport$:e,tablet$:t}){N([Ne("search"),t]).pipe(m(([r,o])=>r&&!o),v(r=>I(r).pipe(Ge(r?400:100))),re(e)).subscribe(([r,{offset:{y:o}}])=>{if(r)document.body.setAttribute("data-md-scrolllock",""),document.body.style.top=`-${o}px`;else{let n=-1*parseInt(document.body.style.top,10);document.body.removeAttribute("data-md-scrolllock"),document.body.style.top="",n&&window.scrollTo(0,n)}})}Object.entries||(Object.entries=function(e){let t=[];for(let r of 
Object.keys(e))t.push([r,e[r]]);return t});Object.values||(Object.values=function(e){let t=[];for(let r of Object.keys(e))t.push(e[r]);return t});typeof Element!="undefined"&&(Element.prototype.scrollTo||(Element.prototype.scrollTo=function(e,t){typeof e=="object"?(this.scrollLeft=e.left,this.scrollTop=e.top):(this.scrollLeft=e,this.scrollTop=t)}),Element.prototype.replaceWith||(Element.prototype.replaceWith=function(...e){let t=this.parentNode;if(t){e.length===0&&t.removeChild(this);for(let r=e.length-1;r>=0;r--){let o=e[r];typeof o=="string"?o=document.createTextNode(o):o.parentNode&&o.parentNode.removeChild(o),r?t.insertBefore(this.previousSibling,o):t.replaceChild(o,this)}}}));function ys(){return location.protocol==="file:"?wt(`${new URL("search/search_index.js",eo.base)}`).pipe(m(()=>__index),G(1)):je(new URL("search/search_index.json",eo.base))}document.documentElement.classList.remove("no-js");document.documentElement.classList.add("js");var ot=Go(),Ft=sn(),Ot=ln(Ft),to=an(),Oe=gn(),hr=$t("(min-width: 60em)"),Mi=$t("(min-width: 76.25em)"),_i=mn(),eo=xe(),Ai=document.forms.namedItem("search")?ys():Ye,ro=new g;Zn({alert$:ro});var oo=new g;B("navigation.instant")&&oi({location$:Ft,viewport$:Oe,progress$:oo}).subscribe(ot);var Li;((Li=eo.version)==null?void 0:Li.provider)==="mike"&&ci({document$:ot});O(Ft,Ot).pipe(Ge(125)).subscribe(()=>{Je("drawer",!1),Je("search",!1)});to.pipe(b(({mode:e})=>e==="global")).subscribe(e=>{switch(e.type){case"p":case",":let t=fe("link[rel=prev]");typeof t!="undefined"&<(t);break;case"n":case".":let r=fe("link[rel=next]");typeof r!="undefined"&<(r);break;case"Enter":let o=Ie();o instanceof HTMLLabelElement&&o.click()}});wi({viewport$:Oe,document$:ot});Ti({document$:ot,tablet$:hr});Si({document$:ot});Oi({viewport$:Oe,tablet$:hr});var rt=Kn(Se("header"),{viewport$:Oe}),jt=ot.pipe(m(()=>Se("main")),v(e=>Gn(e,{viewport$:Oe,header$:rt})),G(1)),xs=O(...ae("consent").map(e=>En(e,{target$:Ot})),...ae("dialog").map(e=>qn(e,{alert$:ro})),...ae("palette").map(e=>Jn(e)),...ae("progress").map(e=>Xn(e,{progress$:oo})),...ae("search").map(e=>ui(e,{index$:Ai,keyboard$:to})),...ae("source").map(e=>gi(e))),Es=C(()=>O(...ae("announce").map(e=>xn(e)),...ae("content").map(e=>Nn(e,{viewport$:Oe,target$:Ot,print$:_i})),...ae("content").map(e=>B("search.highlight")?di(e,{index$:Ai,location$:Ft}):S),...ae("header").map(e=>Yn(e,{viewport$:Oe,header$:rt,main$:jt})),...ae("header-title").map(e=>Bn(e,{viewport$:Oe,header$:rt})),...ae("sidebar").map(e=>e.getAttribute("data-md-type")==="navigation"?zr(Mi,()=>Zr(e,{viewport$:Oe,header$:rt,main$:jt})):zr(hr,()=>Zr(e,{viewport$:Oe,header$:rt,main$:jt}))),...ae("tabs").map(e=>yi(e,{viewport$:Oe,header$:rt})),...ae("toc").map(e=>xi(e,{viewport$:Oe,header$:rt,main$:jt,target$:Ot})),...ae("top").map(e=>Ei(e,{viewport$:Oe,header$:rt,main$:jt,target$:Ot})))),Ci=ot.pipe(v(()=>Es),Re(xs),G(1));Ci.subscribe();window.document$=ot;window.location$=Ft;window.target$=Ot;window.keyboard$=to;window.viewport$=Oe;window.tablet$=hr;window.screen$=Mi;window.print$=_i;window.alert$=ro;window.progress$=oo;window.component$=Ci;})(); +//# sourceMappingURL=bundle.f55a23d4.min.js.map + diff --git a/assets/javascripts/bundle.f55a23d4.min.js.map b/assets/javascripts/bundle.f55a23d4.min.js.map new file mode 100644 index 0000000..e3de73f --- /dev/null +++ b/assets/javascripts/bundle.f55a23d4.min.js.map @@ -0,0 +1,7 @@ +{ + "version": 3, + "sources": ["node_modules/focus-visible/dist/focus-visible.js", "node_modules/escape-html/index.js", 
"node_modules/clipboard/dist/clipboard.js", "src/templates/assets/javascripts/bundle.ts", "node_modules/tslib/tslib.es6.mjs", "node_modules/rxjs/src/internal/util/isFunction.ts", "node_modules/rxjs/src/internal/util/createErrorClass.ts", "node_modules/rxjs/src/internal/util/UnsubscriptionError.ts", "node_modules/rxjs/src/internal/util/arrRemove.ts", "node_modules/rxjs/src/internal/Subscription.ts", "node_modules/rxjs/src/internal/config.ts", "node_modules/rxjs/src/internal/scheduler/timeoutProvider.ts", "node_modules/rxjs/src/internal/util/reportUnhandledError.ts", "node_modules/rxjs/src/internal/util/noop.ts", "node_modules/rxjs/src/internal/NotificationFactories.ts", "node_modules/rxjs/src/internal/util/errorContext.ts", "node_modules/rxjs/src/internal/Subscriber.ts", "node_modules/rxjs/src/internal/symbol/observable.ts", "node_modules/rxjs/src/internal/util/identity.ts", "node_modules/rxjs/src/internal/util/pipe.ts", "node_modules/rxjs/src/internal/Observable.ts", "node_modules/rxjs/src/internal/util/lift.ts", "node_modules/rxjs/src/internal/operators/OperatorSubscriber.ts", "node_modules/rxjs/src/internal/scheduler/animationFrameProvider.ts", "node_modules/rxjs/src/internal/util/ObjectUnsubscribedError.ts", "node_modules/rxjs/src/internal/Subject.ts", "node_modules/rxjs/src/internal/BehaviorSubject.ts", "node_modules/rxjs/src/internal/scheduler/dateTimestampProvider.ts", "node_modules/rxjs/src/internal/ReplaySubject.ts", "node_modules/rxjs/src/internal/scheduler/Action.ts", "node_modules/rxjs/src/internal/scheduler/intervalProvider.ts", "node_modules/rxjs/src/internal/scheduler/AsyncAction.ts", "node_modules/rxjs/src/internal/Scheduler.ts", "node_modules/rxjs/src/internal/scheduler/AsyncScheduler.ts", "node_modules/rxjs/src/internal/scheduler/async.ts", "node_modules/rxjs/src/internal/scheduler/QueueAction.ts", "node_modules/rxjs/src/internal/scheduler/QueueScheduler.ts", "node_modules/rxjs/src/internal/scheduler/queue.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameAction.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameScheduler.ts", "node_modules/rxjs/src/internal/scheduler/animationFrame.ts", "node_modules/rxjs/src/internal/observable/empty.ts", "node_modules/rxjs/src/internal/util/isScheduler.ts", "node_modules/rxjs/src/internal/util/args.ts", "node_modules/rxjs/src/internal/util/isArrayLike.ts", "node_modules/rxjs/src/internal/util/isPromise.ts", "node_modules/rxjs/src/internal/util/isInteropObservable.ts", "node_modules/rxjs/src/internal/util/isAsyncIterable.ts", "node_modules/rxjs/src/internal/util/throwUnobservableError.ts", "node_modules/rxjs/src/internal/symbol/iterator.ts", "node_modules/rxjs/src/internal/util/isIterable.ts", "node_modules/rxjs/src/internal/util/isReadableStreamLike.ts", "node_modules/rxjs/src/internal/observable/innerFrom.ts", "node_modules/rxjs/src/internal/util/executeSchedule.ts", "node_modules/rxjs/src/internal/operators/observeOn.ts", "node_modules/rxjs/src/internal/operators/subscribeOn.ts", "node_modules/rxjs/src/internal/scheduled/scheduleObservable.ts", "node_modules/rxjs/src/internal/scheduled/schedulePromise.ts", "node_modules/rxjs/src/internal/scheduled/scheduleArray.ts", "node_modules/rxjs/src/internal/scheduled/scheduleIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleAsyncIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleReadableStreamLike.ts", "node_modules/rxjs/src/internal/scheduled/scheduled.ts", "node_modules/rxjs/src/internal/observable/from.ts", 
"node_modules/rxjs/src/internal/observable/of.ts", "node_modules/rxjs/src/internal/observable/throwError.ts", "node_modules/rxjs/src/internal/util/EmptyError.ts", "node_modules/rxjs/src/internal/util/isDate.ts", "node_modules/rxjs/src/internal/operators/map.ts", "node_modules/rxjs/src/internal/util/mapOneOrManyArgs.ts", "node_modules/rxjs/src/internal/util/argsArgArrayOrObject.ts", "node_modules/rxjs/src/internal/util/createObject.ts", "node_modules/rxjs/src/internal/observable/combineLatest.ts", "node_modules/rxjs/src/internal/operators/mergeInternals.ts", "node_modules/rxjs/src/internal/operators/mergeMap.ts", "node_modules/rxjs/src/internal/operators/mergeAll.ts", "node_modules/rxjs/src/internal/operators/concatAll.ts", "node_modules/rxjs/src/internal/observable/concat.ts", "node_modules/rxjs/src/internal/observable/defer.ts", "node_modules/rxjs/src/internal/observable/fromEvent.ts", "node_modules/rxjs/src/internal/observable/fromEventPattern.ts", "node_modules/rxjs/src/internal/observable/timer.ts", "node_modules/rxjs/src/internal/observable/merge.ts", "node_modules/rxjs/src/internal/observable/never.ts", "node_modules/rxjs/src/internal/util/argsOrArgArray.ts", "node_modules/rxjs/src/internal/operators/filter.ts", "node_modules/rxjs/src/internal/observable/zip.ts", "node_modules/rxjs/src/internal/operators/audit.ts", "node_modules/rxjs/src/internal/operators/auditTime.ts", "node_modules/rxjs/src/internal/operators/bufferCount.ts", "node_modules/rxjs/src/internal/operators/catchError.ts", "node_modules/rxjs/src/internal/operators/scanInternals.ts", "node_modules/rxjs/src/internal/operators/combineLatest.ts", "node_modules/rxjs/src/internal/operators/combineLatestWith.ts", "node_modules/rxjs/src/internal/operators/debounce.ts", "node_modules/rxjs/src/internal/operators/debounceTime.ts", "node_modules/rxjs/src/internal/operators/defaultIfEmpty.ts", "node_modules/rxjs/src/internal/operators/take.ts", "node_modules/rxjs/src/internal/operators/ignoreElements.ts", "node_modules/rxjs/src/internal/operators/mapTo.ts", "node_modules/rxjs/src/internal/operators/delayWhen.ts", "node_modules/rxjs/src/internal/operators/delay.ts", "node_modules/rxjs/src/internal/operators/distinctUntilChanged.ts", "node_modules/rxjs/src/internal/operators/distinctUntilKeyChanged.ts", "node_modules/rxjs/src/internal/operators/throwIfEmpty.ts", "node_modules/rxjs/src/internal/operators/endWith.ts", "node_modules/rxjs/src/internal/operators/finalize.ts", "node_modules/rxjs/src/internal/operators/first.ts", "node_modules/rxjs/src/internal/operators/takeLast.ts", "node_modules/rxjs/src/internal/operators/merge.ts", "node_modules/rxjs/src/internal/operators/mergeWith.ts", "node_modules/rxjs/src/internal/operators/repeat.ts", "node_modules/rxjs/src/internal/operators/scan.ts", "node_modules/rxjs/src/internal/operators/share.ts", "node_modules/rxjs/src/internal/operators/shareReplay.ts", "node_modules/rxjs/src/internal/operators/skip.ts", "node_modules/rxjs/src/internal/operators/skipUntil.ts", "node_modules/rxjs/src/internal/operators/startWith.ts", "node_modules/rxjs/src/internal/operators/switchMap.ts", "node_modules/rxjs/src/internal/operators/takeUntil.ts", "node_modules/rxjs/src/internal/operators/takeWhile.ts", "node_modules/rxjs/src/internal/operators/tap.ts", "node_modules/rxjs/src/internal/operators/throttle.ts", "node_modules/rxjs/src/internal/operators/throttleTime.ts", "node_modules/rxjs/src/internal/operators/withLatestFrom.ts", "node_modules/rxjs/src/internal/operators/zip.ts", 
"node_modules/rxjs/src/internal/operators/zipWith.ts", "src/templates/assets/javascripts/browser/document/index.ts", "src/templates/assets/javascripts/browser/element/_/index.ts", "src/templates/assets/javascripts/browser/element/focus/index.ts", "src/templates/assets/javascripts/browser/element/hover/index.ts", "src/templates/assets/javascripts/utilities/h/index.ts", "src/templates/assets/javascripts/utilities/round/index.ts", "src/templates/assets/javascripts/browser/script/index.ts", "src/templates/assets/javascripts/browser/element/size/_/index.ts", "src/templates/assets/javascripts/browser/element/size/content/index.ts", "src/templates/assets/javascripts/browser/element/offset/_/index.ts", "src/templates/assets/javascripts/browser/element/offset/content/index.ts", "src/templates/assets/javascripts/browser/element/visibility/index.ts", "src/templates/assets/javascripts/browser/toggle/index.ts", "src/templates/assets/javascripts/browser/keyboard/index.ts", "src/templates/assets/javascripts/browser/location/_/index.ts", "src/templates/assets/javascripts/browser/location/hash/index.ts", "src/templates/assets/javascripts/browser/media/index.ts", "src/templates/assets/javascripts/browser/request/index.ts", "src/templates/assets/javascripts/browser/viewport/offset/index.ts", "src/templates/assets/javascripts/browser/viewport/size/index.ts", "src/templates/assets/javascripts/browser/viewport/_/index.ts", "src/templates/assets/javascripts/browser/viewport/at/index.ts", "src/templates/assets/javascripts/browser/worker/index.ts", "src/templates/assets/javascripts/_/index.ts", "src/templates/assets/javascripts/components/_/index.ts", "src/templates/assets/javascripts/components/announce/index.ts", "src/templates/assets/javascripts/components/consent/index.ts", "src/templates/assets/javascripts/templates/tooltip/index.tsx", "src/templates/assets/javascripts/templates/annotation/index.tsx", "src/templates/assets/javascripts/templates/clipboard/index.tsx", "src/templates/assets/javascripts/templates/search/index.tsx", "src/templates/assets/javascripts/templates/source/index.tsx", "src/templates/assets/javascripts/templates/tabbed/index.tsx", "src/templates/assets/javascripts/templates/table/index.tsx", "src/templates/assets/javascripts/templates/version/index.tsx", "src/templates/assets/javascripts/components/tooltip2/index.ts", "src/templates/assets/javascripts/components/content/annotation/_/index.ts", "src/templates/assets/javascripts/components/content/annotation/list/index.ts", "src/templates/assets/javascripts/components/content/annotation/block/index.ts", "src/templates/assets/javascripts/components/content/code/_/index.ts", "src/templates/assets/javascripts/components/content/details/index.ts", "src/templates/assets/javascripts/components/content/mermaid/index.css", "src/templates/assets/javascripts/components/content/mermaid/index.ts", "src/templates/assets/javascripts/components/content/table/index.ts", "src/templates/assets/javascripts/components/content/tabs/index.ts", "src/templates/assets/javascripts/components/content/_/index.ts", "src/templates/assets/javascripts/components/dialog/index.ts", "src/templates/assets/javascripts/components/tooltip/index.ts", "src/templates/assets/javascripts/components/header/_/index.ts", "src/templates/assets/javascripts/components/header/title/index.ts", "src/templates/assets/javascripts/components/main/index.ts", "src/templates/assets/javascripts/components/palette/index.ts", "src/templates/assets/javascripts/components/progress/index.ts", 
"src/templates/assets/javascripts/integrations/clipboard/index.ts", "src/templates/assets/javascripts/integrations/sitemap/index.ts", "src/templates/assets/javascripts/integrations/instant/index.ts", "src/templates/assets/javascripts/integrations/search/highlighter/index.ts", "src/templates/assets/javascripts/integrations/search/worker/message/index.ts", "src/templates/assets/javascripts/integrations/search/worker/_/index.ts", "src/templates/assets/javascripts/integrations/version/findurl/index.ts", "src/templates/assets/javascripts/integrations/version/index.ts", "src/templates/assets/javascripts/components/search/query/index.ts", "src/templates/assets/javascripts/components/search/result/index.ts", "src/templates/assets/javascripts/components/search/share/index.ts", "src/templates/assets/javascripts/components/search/suggest/index.ts", "src/templates/assets/javascripts/components/search/_/index.ts", "src/templates/assets/javascripts/components/search/highlight/index.ts", "src/templates/assets/javascripts/components/sidebar/index.ts", "src/templates/assets/javascripts/components/source/facts/github/index.ts", "src/templates/assets/javascripts/components/source/facts/gitlab/index.ts", "src/templates/assets/javascripts/components/source/facts/_/index.ts", "src/templates/assets/javascripts/components/source/_/index.ts", "src/templates/assets/javascripts/components/tabs/index.ts", "src/templates/assets/javascripts/components/toc/index.ts", "src/templates/assets/javascripts/components/top/index.ts", "src/templates/assets/javascripts/patches/ellipsis/index.ts", "src/templates/assets/javascripts/patches/indeterminate/index.ts", "src/templates/assets/javascripts/patches/scrollfix/index.ts", "src/templates/assets/javascripts/patches/scrolllock/index.ts", "src/templates/assets/javascripts/polyfills/index.ts"], + "sourcesContent": ["(function (global, factory) {\n typeof exports === 'object' && typeof module !== 'undefined' ? factory() :\n typeof define === 'function' && define.amd ? define(factory) :\n (factory());\n}(this, (function () { 'use strict';\n\n /**\n * Applies the :focus-visible polyfill at the given scope.\n * A scope in this case is either the top-level Document or a Shadow Root.\n *\n * @param {(Document|ShadowRoot)} scope\n * @see https://github.com/WICG/focus-visible\n */\n function applyFocusVisiblePolyfill(scope) {\n var hadKeyboardEvent = true;\n var hadFocusVisibleRecently = false;\n var hadFocusVisibleRecentlyTimeout = null;\n\n var inputTypesAllowlist = {\n text: true,\n search: true,\n url: true,\n tel: true,\n email: true,\n password: true,\n number: true,\n date: true,\n month: true,\n week: true,\n time: true,\n datetime: true,\n 'datetime-local': true\n };\n\n /**\n * Helper function for legacy browsers and iframes which sometimes focus\n * elements like document, body, and non-interactive SVG.\n * @param {Element} el\n */\n function isValidFocusTarget(el) {\n if (\n el &&\n el !== document &&\n el.nodeName !== 'HTML' &&\n el.nodeName !== 'BODY' &&\n 'classList' in el &&\n 'contains' in el.classList\n ) {\n return true;\n }\n return false;\n }\n\n /**\n * Computes whether the given element should automatically trigger the\n * `focus-visible` class being added, i.e. 
whether it should always match\n * `:focus-visible` when focused.\n * @param {Element} el\n * @return {boolean}\n */\n function focusTriggersKeyboardModality(el) {\n var type = el.type;\n var tagName = el.tagName;\n\n if (tagName === 'INPUT' && inputTypesAllowlist[type] && !el.readOnly) {\n return true;\n }\n\n if (tagName === 'TEXTAREA' && !el.readOnly) {\n return true;\n }\n\n if (el.isContentEditable) {\n return true;\n }\n\n return false;\n }\n\n /**\n * Add the `focus-visible` class to the given element if it was not added by\n * the author.\n * @param {Element} el\n */\n function addFocusVisibleClass(el) {\n if (el.classList.contains('focus-visible')) {\n return;\n }\n el.classList.add('focus-visible');\n el.setAttribute('data-focus-visible-added', '');\n }\n\n /**\n * Remove the `focus-visible` class from the given element if it was not\n * originally added by the author.\n * @param {Element} el\n */\n function removeFocusVisibleClass(el) {\n if (!el.hasAttribute('data-focus-visible-added')) {\n return;\n }\n el.classList.remove('focus-visible');\n el.removeAttribute('data-focus-visible-added');\n }\n\n /**\n * If the most recent user interaction was via the keyboard;\n * and the key press did not include a meta, alt/option, or control key;\n * then the modality is keyboard. Otherwise, the modality is not keyboard.\n * Apply `focus-visible` to any current active element and keep track\n * of our keyboard modality state with `hadKeyboardEvent`.\n * @param {KeyboardEvent} e\n */\n function onKeyDown(e) {\n if (e.metaKey || e.altKey || e.ctrlKey) {\n return;\n }\n\n if (isValidFocusTarget(scope.activeElement)) {\n addFocusVisibleClass(scope.activeElement);\n }\n\n hadKeyboardEvent = true;\n }\n\n /**\n * If at any point a user clicks with a pointing device, ensure that we change\n * the modality away from keyboard.\n * This avoids the situation where a user presses a key on an already focused\n * element, and then clicks on a different element, focusing it with a\n * pointing device, while we still think we're in keyboard modality.\n * @param {Event} e\n */\n function onPointerDown(e) {\n hadKeyboardEvent = false;\n }\n\n /**\n * On `focus`, add the `focus-visible` class to the target if:\n * - the target received focus as a result of keyboard navigation, or\n * - the event target is an element that will likely require interaction\n * via the keyboard (e.g. 
a text box)\n * @param {Event} e\n */\n function onFocus(e) {\n // Prevent IE from focusing the document or HTML element.\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (hadKeyboardEvent || focusTriggersKeyboardModality(e.target)) {\n addFocusVisibleClass(e.target);\n }\n }\n\n /**\n * On `blur`, remove the `focus-visible` class from the target.\n * @param {Event} e\n */\n function onBlur(e) {\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (\n e.target.classList.contains('focus-visible') ||\n e.target.hasAttribute('data-focus-visible-added')\n ) {\n // To detect a tab/window switch, we look for a blur event followed\n // rapidly by a visibility change.\n // If we don't see a visibility change within 100ms, it's probably a\n // regular focus change.\n hadFocusVisibleRecently = true;\n window.clearTimeout(hadFocusVisibleRecentlyTimeout);\n hadFocusVisibleRecentlyTimeout = window.setTimeout(function() {\n hadFocusVisibleRecently = false;\n }, 100);\n removeFocusVisibleClass(e.target);\n }\n }\n\n /**\n * If the user changes tabs, keep track of whether or not the previously\n * focused element had .focus-visible.\n * @param {Event} e\n */\n function onVisibilityChange(e) {\n if (document.visibilityState === 'hidden') {\n // If the tab becomes active again, the browser will handle calling focus\n // on the element (Safari actually calls it twice).\n // If this tab change caused a blur on an element with focus-visible,\n // re-apply the class when the user switches back to the tab.\n if (hadFocusVisibleRecently) {\n hadKeyboardEvent = true;\n }\n addInitialPointerMoveListeners();\n }\n }\n\n /**\n * Add a group of listeners to detect usage of any pointing devices.\n * These listeners will be added when the polyfill first loads, and anytime\n * the window is blurred, so that they are active when the window regains\n * focus.\n */\n function addInitialPointerMoveListeners() {\n document.addEventListener('mousemove', onInitialPointerMove);\n document.addEventListener('mousedown', onInitialPointerMove);\n document.addEventListener('mouseup', onInitialPointerMove);\n document.addEventListener('pointermove', onInitialPointerMove);\n document.addEventListener('pointerdown', onInitialPointerMove);\n document.addEventListener('pointerup', onInitialPointerMove);\n document.addEventListener('touchmove', onInitialPointerMove);\n document.addEventListener('touchstart', onInitialPointerMove);\n document.addEventListener('touchend', onInitialPointerMove);\n }\n\n function removeInitialPointerMoveListeners() {\n document.removeEventListener('mousemove', onInitialPointerMove);\n document.removeEventListener('mousedown', onInitialPointerMove);\n document.removeEventListener('mouseup', onInitialPointerMove);\n document.removeEventListener('pointermove', onInitialPointerMove);\n document.removeEventListener('pointerdown', onInitialPointerMove);\n document.removeEventListener('pointerup', onInitialPointerMove);\n document.removeEventListener('touchmove', onInitialPointerMove);\n document.removeEventListener('touchstart', onInitialPointerMove);\n document.removeEventListener('touchend', onInitialPointerMove);\n }\n\n /**\n * When the polfyill first loads, assume the user is in keyboard modality.\n * If any event is received from a pointing device (e.g. 
mouse, pointer,\n * touch), turn off keyboard modality.\n * This accounts for situations where focus enters the page from the URL bar.\n * @param {Event} e\n */\n function onInitialPointerMove(e) {\n // Work around a Safari quirk that fires a mousemove on whenever the\n // window blurs, even if you're tabbing out of the page. \u00AF\\_(\u30C4)_/\u00AF\n if (e.target.nodeName && e.target.nodeName.toLowerCase() === 'html') {\n return;\n }\n\n hadKeyboardEvent = false;\n removeInitialPointerMoveListeners();\n }\n\n // For some kinds of state, we are interested in changes at the global scope\n // only. For example, global pointer input, global key presses and global\n // visibility change should affect the state at every scope:\n document.addEventListener('keydown', onKeyDown, true);\n document.addEventListener('mousedown', onPointerDown, true);\n document.addEventListener('pointerdown', onPointerDown, true);\n document.addEventListener('touchstart', onPointerDown, true);\n document.addEventListener('visibilitychange', onVisibilityChange, true);\n\n addInitialPointerMoveListeners();\n\n // For focus and blur, we specifically care about state changes in the local\n // scope. This is because focus / blur events that originate from within a\n // shadow root are not re-dispatched from the host element if it was already\n // the active element in its own scope:\n scope.addEventListener('focus', onFocus, true);\n scope.addEventListener('blur', onBlur, true);\n\n // We detect that a node is a ShadowRoot by ensuring that it is a\n // DocumentFragment and also has a host property. This check covers native\n // implementation and polyfill implementation transparently. If we only cared\n // about the native implementation, we could just check if the scope was\n // an instance of a ShadowRoot.\n if (scope.nodeType === Node.DOCUMENT_FRAGMENT_NODE && scope.host) {\n // Since a ShadowRoot is a special kind of DocumentFragment, it does not\n // have a root element to add a class to. So, we add this attribute to the\n // host element instead:\n scope.host.setAttribute('data-js-focus-visible', '');\n } else if (scope.nodeType === Node.DOCUMENT_NODE) {\n document.documentElement.classList.add('js-focus-visible');\n document.documentElement.setAttribute('data-js-focus-visible', '');\n }\n }\n\n // It is important to wrap all references to global window and document in\n // these checks to support server-side rendering use cases\n // @see https://github.com/WICG/focus-visible/issues/199\n if (typeof window !== 'undefined' && typeof document !== 'undefined') {\n // Make the polyfill helper globally available. 
This can be used as a signal\n // to interested libraries that wish to coordinate with the polyfill for e.g.,\n // applying the polyfill to a shadow root:\n window.applyFocusVisiblePolyfill = applyFocusVisiblePolyfill;\n\n // Notify interested libraries of the polyfill's presence, in case the\n // polyfill was loaded lazily:\n var event;\n\n try {\n event = new CustomEvent('focus-visible-polyfill-ready');\n } catch (error) {\n // IE11 does not support using CustomEvent as a constructor directly:\n event = document.createEvent('CustomEvent');\n event.initCustomEvent('focus-visible-polyfill-ready', false, false, {});\n }\n\n window.dispatchEvent(event);\n }\n\n if (typeof document !== 'undefined') {\n // Apply the polyfill to the global document, so that no JavaScript\n // coordination is required to use the polyfill in the top-level document:\n applyFocusVisiblePolyfill(document);\n }\n\n})));\n", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? 
html + str.substring(lastIndex, index)\n : html;\n}\n", "/*!\n * clipboard.js v2.0.11\n * https://clipboardjs.com/\n *\n * Licensed MIT \u00A9 Zeno Rocha\n */\n(function webpackUniversalModuleDefinition(root, factory) {\n\tif(typeof exports === 'object' && typeof module === 'object')\n\t\tmodule.exports = factory();\n\telse if(typeof define === 'function' && define.amd)\n\t\tdefine([], factory);\n\telse if(typeof exports === 'object')\n\t\texports[\"ClipboardJS\"] = factory();\n\telse\n\t\troot[\"ClipboardJS\"] = factory();\n})(this, function() {\nreturn /******/ (function() { // webpackBootstrap\n/******/ \tvar __webpack_modules__ = ({\n\n/***/ 686:\n/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n\n// EXPORTS\n__webpack_require__.d(__webpack_exports__, {\n \"default\": function() { return /* binding */ clipboard; }\n});\n\n// EXTERNAL MODULE: ./node_modules/tiny-emitter/index.js\nvar tiny_emitter = __webpack_require__(279);\nvar tiny_emitter_default = /*#__PURE__*/__webpack_require__.n(tiny_emitter);\n// EXTERNAL MODULE: ./node_modules/good-listener/src/listen.js\nvar listen = __webpack_require__(370);\nvar listen_default = /*#__PURE__*/__webpack_require__.n(listen);\n// EXTERNAL MODULE: ./node_modules/select/src/select.js\nvar src_select = __webpack_require__(817);\nvar select_default = /*#__PURE__*/__webpack_require__.n(src_select);\n;// CONCATENATED MODULE: ./src/common/command.js\n/**\n * Executes a given operation type.\n * @param {String} type\n * @return {Boolean}\n */\nfunction command(type) {\n try {\n return document.execCommand(type);\n } catch (err) {\n return false;\n }\n}\n;// CONCATENATED MODULE: ./src/actions/cut.js\n\n\n/**\n * Cut action wrapper.\n * @param {String|HTMLElement} target\n * @return {String}\n */\n\nvar ClipboardActionCut = function ClipboardActionCut(target) {\n var selectedText = select_default()(target);\n command('cut');\n return selectedText;\n};\n\n/* harmony default export */ var actions_cut = (ClipboardActionCut);\n;// CONCATENATED MODULE: ./src/common/create-fake-element.js\n/**\n * Creates a fake textarea element with a value.\n * @param {String} value\n * @return {HTMLElement}\n */\nfunction createFakeElement(value) {\n var isRTL = document.documentElement.getAttribute('dir') === 'rtl';\n var fakeElement = document.createElement('textarea'); // Prevent zooming on iOS\n\n fakeElement.style.fontSize = '12pt'; // Reset box model\n\n fakeElement.style.border = '0';\n fakeElement.style.padding = '0';\n fakeElement.style.margin = '0'; // Move element out of screen horizontally\n\n fakeElement.style.position = 'absolute';\n fakeElement.style[isRTL ? 
'right' : 'left'] = '-9999px'; // Move element to the same position vertically\n\n var yPosition = window.pageYOffset || document.documentElement.scrollTop;\n fakeElement.style.top = \"\".concat(yPosition, \"px\");\n fakeElement.setAttribute('readonly', '');\n fakeElement.value = value;\n return fakeElement;\n}\n;// CONCATENATED MODULE: ./src/actions/copy.js\n\n\n\n/**\n * Create fake copy action wrapper using a fake element.\n * @param {String} target\n * @param {Object} options\n * @return {String}\n */\n\nvar fakeCopyAction = function fakeCopyAction(value, options) {\n var fakeElement = createFakeElement(value);\n options.container.appendChild(fakeElement);\n var selectedText = select_default()(fakeElement);\n command('copy');\n fakeElement.remove();\n return selectedText;\n};\n/**\n * Copy action wrapper.\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @return {String}\n */\n\n\nvar ClipboardActionCopy = function ClipboardActionCopy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n var selectedText = '';\n\n if (typeof target === 'string') {\n selectedText = fakeCopyAction(target, options);\n } else if (target instanceof HTMLInputElement && !['text', 'search', 'url', 'tel', 'password'].includes(target === null || target === void 0 ? void 0 : target.type)) {\n // If input type doesn't support `setSelectionRange`. Simulate it. https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputElement/setSelectionRange\n selectedText = fakeCopyAction(target.value, options);\n } else {\n selectedText = select_default()(target);\n command('copy');\n }\n\n return selectedText;\n};\n\n/* harmony default export */ var actions_copy = (ClipboardActionCopy);\n;// CONCATENATED MODULE: ./src/actions/default.js\nfunction _typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { _typeof = function _typeof(obj) { return typeof obj; }; } else { _typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return _typeof(obj); }\n\n\n\n/**\n * Inner function which performs selection from either `text` or `target`\n * properties and then executes copy or cut operations.\n * @param {Object} options\n */\n\nvar ClipboardActionDefault = function ClipboardActionDefault() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n // Defines base properties passed from constructor.\n var _options$action = options.action,\n action = _options$action === void 0 ? 'copy' : _options$action,\n container = options.container,\n target = options.target,\n text = options.text; // Sets the `action` to be performed which can be either 'copy' or 'cut'.\n\n if (action !== 'copy' && action !== 'cut') {\n throw new Error('Invalid \"action\" value, use either \"copy\" or \"cut\"');\n } // Sets the `target` property using an element that will be have its content copied.\n\n\n if (target !== undefined) {\n if (target && _typeof(target) === 'object' && target.nodeType === 1) {\n if (action === 'copy' && target.hasAttribute('disabled')) {\n throw new Error('Invalid \"target\" attribute. Please use \"readonly\" instead of \"disabled\" attribute');\n }\n\n if (action === 'cut' && (target.hasAttribute('readonly') || target.hasAttribute('disabled'))) {\n throw new Error('Invalid \"target\" attribute. 
You can\\'t cut text from elements with \"readonly\" or \"disabled\" attributes');\n }\n } else {\n throw new Error('Invalid \"target\" value, use a valid Element');\n }\n } // Define selection strategy based on `text` property.\n\n\n if (text) {\n return actions_copy(text, {\n container: container\n });\n } // Defines which selection strategy based on `target` property.\n\n\n if (target) {\n return action === 'cut' ? actions_cut(target) : actions_copy(target, {\n container: container\n });\n }\n};\n\n/* harmony default export */ var actions_default = (ClipboardActionDefault);\n;// CONCATENATED MODULE: ./src/clipboard.js\nfunction clipboard_typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { clipboard_typeof = function _typeof(obj) { return typeof obj; }; } else { clipboard_typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return clipboard_typeof(obj); }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }\n\nfunction _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function\"); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, writable: true, configurable: true } }); if (superClass) _setPrototypeOf(subClass, superClass); }\n\nfunction _setPrototypeOf(o, p) { _setPrototypeOf = Object.setPrototypeOf || function _setPrototypeOf(o, p) { o.__proto__ = p; return o; }; return _setPrototypeOf(o, p); }\n\nfunction _createSuper(Derived) { var hasNativeReflectConstruct = _isNativeReflectConstruct(); return function _createSuperInternal() { var Super = _getPrototypeOf(Derived), result; if (hasNativeReflectConstruct) { var NewTarget = _getPrototypeOf(this).constructor; result = Reflect.construct(Super, arguments, NewTarget); } else { result = Super.apply(this, arguments); } return _possibleConstructorReturn(this, result); }; }\n\nfunction _possibleConstructorReturn(self, call) { if (call && (clipboard_typeof(call) === \"object\" || typeof call === \"function\")) { return call; } return _assertThisInitialized(self); }\n\nfunction _assertThisInitialized(self) { if (self === void 0) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return self; }\n\nfunction _isNativeReflectConstruct() { if (typeof Reflect === \"undefined\" || !Reflect.construct) return false; if (Reflect.construct.sham) return false; if (typeof Proxy === \"function\") return true; try { Date.prototype.toString.call(Reflect.construct(Date, [], function () {})); return true; } catch (e) { return false; } }\n\nfunction _getPrototypeOf(o) { _getPrototypeOf = Object.setPrototypeOf ? 
Object.getPrototypeOf : function _getPrototypeOf(o) { return o.__proto__ || Object.getPrototypeOf(o); }; return _getPrototypeOf(o); }\n\n\n\n\n\n\n/**\n * Helper function to retrieve attribute value.\n * @param {String} suffix\n * @param {Element} element\n */\n\nfunction getAttributeValue(suffix, element) {\n var attribute = \"data-clipboard-\".concat(suffix);\n\n if (!element.hasAttribute(attribute)) {\n return;\n }\n\n return element.getAttribute(attribute);\n}\n/**\n * Base class which takes one or more elements, adds event listeners to them,\n * and instantiates a new `ClipboardAction` on each click.\n */\n\n\nvar Clipboard = /*#__PURE__*/function (_Emitter) {\n _inherits(Clipboard, _Emitter);\n\n var _super = _createSuper(Clipboard);\n\n /**\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n * @param {Object} options\n */\n function Clipboard(trigger, options) {\n var _this;\n\n _classCallCheck(this, Clipboard);\n\n _this = _super.call(this);\n\n _this.resolveOptions(options);\n\n _this.listenClick(trigger);\n\n return _this;\n }\n /**\n * Defines if attributes would be resolved using internal setter functions\n * or custom functions that were passed in the constructor.\n * @param {Object} options\n */\n\n\n _createClass(Clipboard, [{\n key: \"resolveOptions\",\n value: function resolveOptions() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n this.action = typeof options.action === 'function' ? options.action : this.defaultAction;\n this.target = typeof options.target === 'function' ? options.target : this.defaultTarget;\n this.text = typeof options.text === 'function' ? options.text : this.defaultText;\n this.container = clipboard_typeof(options.container) === 'object' ? options.container : document.body;\n }\n /**\n * Adds a click event listener to the passed trigger.\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n */\n\n }, {\n key: \"listenClick\",\n value: function listenClick(trigger) {\n var _this2 = this;\n\n this.listener = listen_default()(trigger, 'click', function (e) {\n return _this2.onClick(e);\n });\n }\n /**\n * Defines a new `ClipboardAction` on each click event.\n * @param {Event} e\n */\n\n }, {\n key: \"onClick\",\n value: function onClick(e) {\n var trigger = e.delegateTarget || e.currentTarget;\n var action = this.action(trigger) || 'copy';\n var text = actions_default({\n action: action,\n container: this.container,\n target: this.target(trigger),\n text: this.text(trigger)\n }); // Fires an event based on the copy operation result.\n\n this.emit(text ? 
'success' : 'error', {\n action: action,\n text: text,\n trigger: trigger,\n clearSelection: function clearSelection() {\n if (trigger) {\n trigger.focus();\n }\n\n window.getSelection().removeAllRanges();\n }\n });\n }\n /**\n * Default `action` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultAction\",\n value: function defaultAction(trigger) {\n return getAttributeValue('action', trigger);\n }\n /**\n * Default `target` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultTarget\",\n value: function defaultTarget(trigger) {\n var selector = getAttributeValue('target', trigger);\n\n if (selector) {\n return document.querySelector(selector);\n }\n }\n /**\n * Allow fire programmatically a copy action\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @returns Text copied.\n */\n\n }, {\n key: \"defaultText\",\n\n /**\n * Default `text` lookup function.\n * @param {Element} trigger\n */\n value: function defaultText(trigger) {\n return getAttributeValue('text', trigger);\n }\n /**\n * Destroy lifecycle.\n */\n\n }, {\n key: \"destroy\",\n value: function destroy() {\n this.listener.destroy();\n }\n }], [{\n key: \"copy\",\n value: function copy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n return actions_copy(target, options);\n }\n /**\n * Allow fire programmatically a cut action\n * @param {String|HTMLElement} target\n * @returns Text cutted.\n */\n\n }, {\n key: \"cut\",\n value: function cut(target) {\n return actions_cut(target);\n }\n /**\n * Returns the support of the given action, or all actions if no action is\n * given.\n * @param {String} [action]\n */\n\n }, {\n key: \"isSupported\",\n value: function isSupported() {\n var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : ['copy', 'cut'];\n var actions = typeof action === 'string' ? 
[action] : action;\n var support = !!document.queryCommandSupported;\n actions.forEach(function (action) {\n support = support && !!document.queryCommandSupported(action);\n });\n return support;\n }\n }]);\n\n return Clipboard;\n}((tiny_emitter_default()));\n\n/* harmony default export */ var clipboard = (Clipboard);\n\n/***/ }),\n\n/***/ 828:\n/***/ (function(module) {\n\nvar DOCUMENT_NODE_TYPE = 9;\n\n/**\n * A polyfill for Element.matches()\n */\nif (typeof Element !== 'undefined' && !Element.prototype.matches) {\n var proto = Element.prototype;\n\n proto.matches = proto.matchesSelector ||\n proto.mozMatchesSelector ||\n proto.msMatchesSelector ||\n proto.oMatchesSelector ||\n proto.webkitMatchesSelector;\n}\n\n/**\n * Finds the closest parent that matches a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @return {Function}\n */\nfunction closest (element, selector) {\n while (element && element.nodeType !== DOCUMENT_NODE_TYPE) {\n if (typeof element.matches === 'function' &&\n element.matches(selector)) {\n return element;\n }\n element = element.parentNode;\n }\n}\n\nmodule.exports = closest;\n\n\n/***/ }),\n\n/***/ 438:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar closest = __webpack_require__(828);\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction _delegate(element, selector, type, callback, useCapture) {\n var listenerFn = listener.apply(this, arguments);\n\n element.addEventListener(type, listenerFn, useCapture);\n\n return {\n destroy: function() {\n element.removeEventListener(type, listenerFn, useCapture);\n }\n }\n}\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element|String|Array} [elements]\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction delegate(elements, selector, type, callback, useCapture) {\n // Handle the regular Element usage\n if (typeof elements.addEventListener === 'function') {\n return _delegate.apply(null, arguments);\n }\n\n // Handle Element-less usage, it defaults to global delegation\n if (typeof type === 'function') {\n // Use `document` as the first parameter, then apply arguments\n // This is a short way to .unshift `arguments` without running into deoptimizations\n return _delegate.bind(null, document).apply(null, arguments);\n }\n\n // Handle Selector-based usage\n if (typeof elements === 'string') {\n elements = document.querySelectorAll(elements);\n }\n\n // Handle Array-like based usage\n return Array.prototype.map.call(elements, function (element) {\n return _delegate(element, selector, type, callback, useCapture);\n });\n}\n\n/**\n * Finds closest match and invokes callback.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Function}\n */\nfunction listener(element, selector, type, callback) {\n return function(e) {\n e.delegateTarget = closest(e.target, selector);\n\n if (e.delegateTarget) {\n callback.call(element, e);\n }\n }\n}\n\nmodule.exports = delegate;\n\n\n/***/ }),\n\n/***/ 879:\n/***/ (function(__unused_webpack_module, exports) {\n\n/**\n * Check if argument is a HTML element.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.node = function(value) {\n return value !== undefined\n && 
value instanceof HTMLElement\n && value.nodeType === 1;\n};\n\n/**\n * Check if argument is a list of HTML elements.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.nodeList = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return value !== undefined\n && (type === '[object NodeList]' || type === '[object HTMLCollection]')\n && ('length' in value)\n && (value.length === 0 || exports.node(value[0]));\n};\n\n/**\n * Check if argument is a string.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.string = function(value) {\n return typeof value === 'string'\n || value instanceof String;\n};\n\n/**\n * Check if argument is a function.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.fn = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return type === '[object Function]';\n};\n\n\n/***/ }),\n\n/***/ 370:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar is = __webpack_require__(879);\nvar delegate = __webpack_require__(438);\n\n/**\n * Validates all params and calls the right\n * listener function based on its target type.\n *\n * @param {String|HTMLElement|HTMLCollection|NodeList} target\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listen(target, type, callback) {\n if (!target && !type && !callback) {\n throw new Error('Missing required arguments');\n }\n\n if (!is.string(type)) {\n throw new TypeError('Second argument must be a String');\n }\n\n if (!is.fn(callback)) {\n throw new TypeError('Third argument must be a Function');\n }\n\n if (is.node(target)) {\n return listenNode(target, type, callback);\n }\n else if (is.nodeList(target)) {\n return listenNodeList(target, type, callback);\n }\n else if (is.string(target)) {\n return listenSelector(target, type, callback);\n }\n else {\n throw new TypeError('First argument must be a String, HTMLElement, HTMLCollection, or NodeList');\n }\n}\n\n/**\n * Adds an event listener to a HTML element\n * and returns a remove listener function.\n *\n * @param {HTMLElement} node\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNode(node, type, callback) {\n node.addEventListener(type, callback);\n\n return {\n destroy: function() {\n node.removeEventListener(type, callback);\n }\n }\n}\n\n/**\n * Add an event listener to a list of HTML elements\n * and returns a remove listener function.\n *\n * @param {NodeList|HTMLCollection} nodeList\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNodeList(nodeList, type, callback) {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.addEventListener(type, callback);\n });\n\n return {\n destroy: function() {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.removeEventListener(type, callback);\n });\n }\n }\n}\n\n/**\n * Add an event listener to a selector\n * and returns a remove listener function.\n *\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenSelector(selector, type, callback) {\n return delegate(document.body, selector, type, callback);\n}\n\nmodule.exports = listen;\n\n\n/***/ }),\n\n/***/ 817:\n/***/ (function(module) {\n\nfunction select(element) {\n var selectedText;\n\n if (element.nodeName === 'SELECT') {\n element.focus();\n\n selectedText = element.value;\n }\n else if (element.nodeName === 'INPUT' || element.nodeName 
=== 'TEXTAREA') {\n var isReadOnly = element.hasAttribute('readonly');\n\n if (!isReadOnly) {\n element.setAttribute('readonly', '');\n }\n\n element.select();\n element.setSelectionRange(0, element.value.length);\n\n if (!isReadOnly) {\n element.removeAttribute('readonly');\n }\n\n selectedText = element.value;\n }\n else {\n if (element.hasAttribute('contenteditable')) {\n element.focus();\n }\n\n var selection = window.getSelection();\n var range = document.createRange();\n\n range.selectNodeContents(element);\n selection.removeAllRanges();\n selection.addRange(range);\n\n selectedText = selection.toString();\n }\n\n return selectedText;\n}\n\nmodule.exports = select;\n\n\n/***/ }),\n\n/***/ 279:\n/***/ (function(module) {\n\nfunction E () {\n // Keep this empty so it's easier to inherit from\n // (via https://github.com/lipsmack from https://github.com/scottcorgan/tiny-emitter/issues/3)\n}\n\nE.prototype = {\n on: function (name, callback, ctx) {\n var e = this.e || (this.e = {});\n\n (e[name] || (e[name] = [])).push({\n fn: callback,\n ctx: ctx\n });\n\n return this;\n },\n\n once: function (name, callback, ctx) {\n var self = this;\n function listener () {\n self.off(name, listener);\n callback.apply(ctx, arguments);\n };\n\n listener._ = callback\n return this.on(name, listener, ctx);\n },\n\n emit: function (name) {\n var data = [].slice.call(arguments, 1);\n var evtArr = ((this.e || (this.e = {}))[name] || []).slice();\n var i = 0;\n var len = evtArr.length;\n\n for (i; i < len; i++) {\n evtArr[i].fn.apply(evtArr[i].ctx, data);\n }\n\n return this;\n },\n\n off: function (name, callback) {\n var e = this.e || (this.e = {});\n var evts = e[name];\n var liveEvents = [];\n\n if (evts && callback) {\n for (var i = 0, len = evts.length; i < len; i++) {\n if (evts[i].fn !== callback && evts[i].fn._ !== callback)\n liveEvents.push(evts[i]);\n }\n }\n\n // Remove event from queue to prevent memory leak\n // Suggested by https://github.com/lazd\n // Ref: https://github.com/scottcorgan/tiny-emitter/commit/c6ebfaa9bc973b33d110a84a307742b7cf94c953#commitcomment-5024910\n\n (liveEvents.length)\n ? 
e[name] = liveEvents\n : delete e[name];\n\n return this;\n }\n};\n\nmodule.exports = E;\nmodule.exports.TinyEmitter = E;\n\n\n/***/ })\n\n/******/ \t});\n/************************************************************************/\n/******/ \t// The module cache\n/******/ \tvar __webpack_module_cache__ = {};\n/******/ \t\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(__webpack_module_cache__[moduleId]) {\n/******/ \t\t\treturn __webpack_module_cache__[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = __webpack_module_cache__[moduleId] = {\n/******/ \t\t\t// no module.id needed\n/******/ \t\t\t// no module.loaded needed\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/ \t\n/******/ \t\t// Execute the module function\n/******/ \t\t__webpack_modules__[moduleId](module, module.exports, __webpack_require__);\n/******/ \t\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/ \t\n/************************************************************************/\n/******/ \t/* webpack/runtime/compat get default export */\n/******/ \t!function() {\n/******/ \t\t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t\t__webpack_require__.n = function(module) {\n/******/ \t\t\tvar getter = module && module.__esModule ?\n/******/ \t\t\t\tfunction() { return module['default']; } :\n/******/ \t\t\t\tfunction() { return module; };\n/******/ \t\t\t__webpack_require__.d(getter, { a: getter });\n/******/ \t\t\treturn getter;\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/define property getters */\n/******/ \t!function() {\n/******/ \t\t// define getter functions for harmony exports\n/******/ \t\t__webpack_require__.d = function(exports, definition) {\n/******/ \t\t\tfor(var key in definition) {\n/******/ \t\t\t\tif(__webpack_require__.o(definition, key) && !__webpack_require__.o(exports, key)) {\n/******/ \t\t\t\t\tObject.defineProperty(exports, key, { enumerable: true, get: definition[key] });\n/******/ \t\t\t\t}\n/******/ \t\t\t}\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/hasOwnProperty shorthand */\n/******/ \t!function() {\n/******/ \t\t__webpack_require__.o = function(obj, prop) { return Object.prototype.hasOwnProperty.call(obj, prop); }\n/******/ \t}();\n/******/ \t\n/************************************************************************/\n/******/ \t// module exports must be returned from runtime so entry inlining is disabled\n/******/ \t// startup\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(686);\n/******/ })()\n.default;\n});", "/*\n * Copyright (c) 2016-2025 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF 
ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport \"focus-visible\"\n\nimport {\n EMPTY,\n NEVER,\n Observable,\n Subject,\n defer,\n delay,\n filter,\n map,\n merge,\n mergeWith,\n shareReplay,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"./_\"\nimport {\n at,\n getActiveElement,\n getOptionalElement,\n requestJSON,\n setLocation,\n setToggle,\n watchDocument,\n watchKeyboard,\n watchLocation,\n watchLocationTarget,\n watchMedia,\n watchPrint,\n watchScript,\n watchViewport\n} from \"./browser\"\nimport {\n getComponentElement,\n getComponentElements,\n mountAnnounce,\n mountBackToTop,\n mountConsent,\n mountContent,\n mountDialog,\n mountHeader,\n mountHeaderTitle,\n mountPalette,\n mountProgress,\n mountSearch,\n mountSearchHiglight,\n mountSidebar,\n mountSource,\n mountTableOfContents,\n mountTabs,\n watchHeader,\n watchMain\n} from \"./components\"\nimport {\n SearchIndex,\n setupClipboardJS,\n setupInstantNavigation,\n setupVersionSelector\n} from \"./integrations\"\nimport {\n patchEllipsis,\n patchIndeterminate,\n patchScrollfix,\n patchScrolllock\n} from \"./patches\"\nimport \"./polyfills\"\n\n/* ----------------------------------------------------------------------------\n * Functions - @todo refactor\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch search index\n *\n * @returns Search index observable\n */\nfunction fetchSearchIndex(): Observable {\n if (location.protocol === \"file:\") {\n return watchScript(\n `${new URL(\"search/search_index.js\", config.base)}`\n )\n .pipe(\n // @ts-ignore - @todo fix typings\n map(() => __index),\n shareReplay(1)\n )\n } else {\n return requestJSON(\n new URL(\"search/search_index.json\", config.base)\n )\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Application\n * ------------------------------------------------------------------------- */\n\n/* Yay, JavaScript is available */\ndocument.documentElement.classList.remove(\"no-js\")\ndocument.documentElement.classList.add(\"js\")\n\n/* Set up navigation observables and subjects */\nconst document$ = watchDocument()\nconst location$ = watchLocation()\nconst target$ = watchLocationTarget(location$)\nconst keyboard$ = watchKeyboard()\n\n/* Set up media observables */\nconst viewport$ = watchViewport()\nconst tablet$ = watchMedia(\"(min-width: 60em)\")\nconst screen$ = watchMedia(\"(min-width: 76.25em)\")\nconst print$ = watchPrint()\n\n/* Retrieve search index, if search is enabled */\nconst config = configuration()\nconst index$ = document.forms.namedItem(\"search\")\n ? 
fetchSearchIndex()\n : NEVER\n\n/* Set up Clipboard.js integration */\nconst alert$ = new Subject()\nsetupClipboardJS({ alert$ })\n\n/* Set up progress indicator */\nconst progress$ = new Subject()\n\n/* Set up instant navigation, if enabled */\nif (feature(\"navigation.instant\"))\n setupInstantNavigation({ location$, viewport$, progress$ })\n .subscribe(document$)\n\n/* Set up version selector */\nif (config.version?.provider === \"mike\")\n setupVersionSelector({ document$ })\n\n/* Always close drawer and search on navigation */\nmerge(location$, target$)\n .pipe(\n delay(125)\n )\n .subscribe(() => {\n setToggle(\"drawer\", false)\n setToggle(\"search\", false)\n })\n\n/* Set up global keyboard handlers */\nkeyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Go to previous page */\n case \"p\":\n case \",\":\n const prev = getOptionalElement(\"link[rel=prev]\")\n if (typeof prev !== \"undefined\")\n setLocation(prev)\n break\n\n /* Go to next page */\n case \"n\":\n case \".\":\n const next = getOptionalElement(\"link[rel=next]\")\n if (typeof next !== \"undefined\")\n setLocation(next)\n break\n\n /* Expand navigation, see https://bit.ly/3ZjG5io */\n case \"Enter\":\n const active = getActiveElement()\n if (active instanceof HTMLLabelElement)\n active.click()\n }\n })\n\n/* Set up patches */\npatchEllipsis({ viewport$, document$ })\npatchIndeterminate({ document$, tablet$ })\npatchScrollfix({ document$ })\npatchScrolllock({ viewport$, tablet$ })\n\n/* Set up header and main area observable */\nconst header$ = watchHeader(getComponentElement(\"header\"), { viewport$ })\nconst main$ = document$\n .pipe(\n map(() => getComponentElement(\"main\")),\n switchMap(el => watchMain(el, { viewport$, header$ })),\n shareReplay(1)\n )\n\n/* Set up control component observables */\nconst control$ = merge(\n\n /* Consent */\n ...getComponentElements(\"consent\")\n .map(el => mountConsent(el, { target$ })),\n\n /* Dialog */\n ...getComponentElements(\"dialog\")\n .map(el => mountDialog(el, { alert$ })),\n\n /* Color palette */\n ...getComponentElements(\"palette\")\n .map(el => mountPalette(el)),\n\n /* Progress bar */\n ...getComponentElements(\"progress\")\n .map(el => mountProgress(el, { progress$ })),\n\n /* Search */\n ...getComponentElements(\"search\")\n .map(el => mountSearch(el, { index$, keyboard$ })),\n\n /* Repository information */\n ...getComponentElements(\"source\")\n .map(el => mountSource(el))\n)\n\n/* Set up content component observables */\nconst content$ = defer(() => merge(\n\n /* Announcement bar */\n ...getComponentElements(\"announce\")\n .map(el => mountAnnounce(el)),\n\n /* Content */\n ...getComponentElements(\"content\")\n .map(el => mountContent(el, { viewport$, target$, print$ })),\n\n /* Search highlighting */\n ...getComponentElements(\"content\")\n .map(el => feature(\"search.highlight\")\n ? mountSearchHiglight(el, { index$, location$ })\n : EMPTY\n ),\n\n /* Header */\n ...getComponentElements(\"header\")\n .map(el => mountHeader(el, { viewport$, header$, main$ })),\n\n /* Header title */\n ...getComponentElements(\"header-title\")\n .map(el => mountHeaderTitle(el, { viewport$, header$ })),\n\n /* Sidebar */\n ...getComponentElements(\"sidebar\")\n .map(el => el.getAttribute(\"data-md-type\") === \"navigation\"\n ? 
at(screen$, () => mountSidebar(el, { viewport$, header$, main$ }))\n : at(tablet$, () => mountSidebar(el, { viewport$, header$, main$ }))\n ),\n\n /* Navigation tabs */\n ...getComponentElements(\"tabs\")\n .map(el => mountTabs(el, { viewport$, header$ })),\n\n /* Table of contents */\n ...getComponentElements(\"toc\")\n .map(el => mountTableOfContents(el, {\n viewport$, header$, main$, target$\n })),\n\n /* Back-to-top button */\n ...getComponentElements(\"top\")\n .map(el => mountBackToTop(el, { viewport$, header$, main$, target$ }))\n))\n\n/* Set up component observables */\nconst component$ = document$\n .pipe(\n switchMap(() => content$),\n mergeWith(control$),\n shareReplay(1)\n )\n\n/* Subscribe to all components */\ncomponent$.subscribe()\n\n/* ----------------------------------------------------------------------------\n * Exports\n * ------------------------------------------------------------------------- */\n\nwindow.document$ = document$ /* Document observable */\nwindow.location$ = location$ /* Location subject */\nwindow.target$ = target$ /* Location target observable */\nwindow.keyboard$ = keyboard$ /* Keyboard observable */\nwindow.viewport$ = viewport$ /* Viewport observable */\nwindow.tablet$ = tablet$ /* Media tablet observable */\nwindow.screen$ = screen$ /* Media screen observable */\nwindow.print$ = print$ /* Media print observable */\nwindow.alert$ = alert$ /* Alert subject */\nwindow.progress$ = progress$ /* Progress indicator subject */\nwindow.component$ = component$ /* Component observable */\n", "/******************************************************************************\nCopyright (c) Microsoft Corporation.\n\nPermission to use, copy, modify, and/or distribute this software for any\npurpose with or without fee is hereby granted.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH\nREGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY\nAND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,\nINDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM\nLOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR\nOTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR\nPERFORMANCE OF THIS SOFTWARE.\n***************************************************************************** */\n/* global Reflect, Promise, SuppressedError, Symbol, Iterator */\n\nvar extendStatics = function(d, b) {\n extendStatics = Object.setPrototypeOf ||\n ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||\n function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; };\n return extendStatics(d, b);\n};\n\nexport function __extends(d, b) {\n if (typeof b !== \"function\" && b !== null)\n throw new TypeError(\"Class extends value \" + String(b) + \" is not a constructor or null\");\n extendStatics(d, b);\n function __() { this.constructor = d; }\n d.prototype = b === null ? 
Object.create(b) : (__.prototype = b.prototype, new __());\n}\n\nexport var __assign = function() {\n __assign = Object.assign || function __assign(t) {\n for (var s, i = 1, n = arguments.length; i < n; i++) {\n s = arguments[i];\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];\n }\n return t;\n }\n return __assign.apply(this, arguments);\n}\n\nexport function __rest(s, e) {\n var t = {};\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)\n t[p] = s[p];\n if (s != null && typeof Object.getOwnPropertySymbols === \"function\")\n for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {\n if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))\n t[p[i]] = s[p[i]];\n }\n return t;\n}\n\nexport function __decorate(decorators, target, key, desc) {\n var c = arguments.length, r = c < 3 ? target : desc === null ? desc = Object.getOwnPropertyDescriptor(target, key) : desc, d;\n if (typeof Reflect === \"object\" && typeof Reflect.decorate === \"function\") r = Reflect.decorate(decorators, target, key, desc);\n else for (var i = decorators.length - 1; i >= 0; i--) if (d = decorators[i]) r = (c < 3 ? d(r) : c > 3 ? d(target, key, r) : d(target, key)) || r;\n return c > 3 && r && Object.defineProperty(target, key, r), r;\n}\n\nexport function __param(paramIndex, decorator) {\n return function (target, key) { decorator(target, key, paramIndex); }\n}\n\nexport function __esDecorate(ctor, descriptorIn, decorators, contextIn, initializers, extraInitializers) {\n function accept(f) { if (f !== void 0 && typeof f !== \"function\") throw new TypeError(\"Function expected\"); return f; }\n var kind = contextIn.kind, key = kind === \"getter\" ? \"get\" : kind === \"setter\" ? \"set\" : \"value\";\n var target = !descriptorIn && ctor ? contextIn[\"static\"] ? ctor : ctor.prototype : null;\n var descriptor = descriptorIn || (target ? Object.getOwnPropertyDescriptor(target, contextIn.name) : {});\n var _, done = false;\n for (var i = decorators.length - 1; i >= 0; i--) {\n var context = {};\n for (var p in contextIn) context[p] = p === \"access\" ? {} : contextIn[p];\n for (var p in contextIn.access) context.access[p] = contextIn.access[p];\n context.addInitializer = function (f) { if (done) throw new TypeError(\"Cannot add initializers after decoration has completed\"); extraInitializers.push(accept(f || null)); };\n var result = (0, decorators[i])(kind === \"accessor\" ? { get: descriptor.get, set: descriptor.set } : descriptor[key], context);\n if (kind === \"accessor\") {\n if (result === void 0) continue;\n if (result === null || typeof result !== \"object\") throw new TypeError(\"Object expected\");\n if (_ = accept(result.get)) descriptor.get = _;\n if (_ = accept(result.set)) descriptor.set = _;\n if (_ = accept(result.init)) initializers.unshift(_);\n }\n else if (_ = accept(result)) {\n if (kind === \"field\") initializers.unshift(_);\n else descriptor[key] = _;\n }\n }\n if (target) Object.defineProperty(target, contextIn.name, descriptor);\n done = true;\n};\n\nexport function __runInitializers(thisArg, initializers, value) {\n var useValue = arguments.length > 2;\n for (var i = 0; i < initializers.length; i++) {\n value = useValue ? initializers[i].call(thisArg, value) : initializers[i].call(thisArg);\n }\n return useValue ? value : void 0;\n};\n\nexport function __propKey(x) {\n return typeof x === \"symbol\" ? 
x : \"\".concat(x);\n};\n\nexport function __setFunctionName(f, name, prefix) {\n if (typeof name === \"symbol\") name = name.description ? \"[\".concat(name.description, \"]\") : \"\";\n return Object.defineProperty(f, \"name\", { configurable: true, value: prefix ? \"\".concat(prefix, \" \", name) : name });\n};\n\nexport function __metadata(metadataKey, metadataValue) {\n if (typeof Reflect === \"object\" && typeof Reflect.metadata === \"function\") return Reflect.metadata(metadataKey, metadataValue);\n}\n\nexport function __awaiter(thisArg, _arguments, P, generator) {\n function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }\n return new (P || (P = Promise))(function (resolve, reject) {\n function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }\n function rejected(value) { try { step(generator[\"throw\"](value)); } catch (e) { reject(e); } }\n function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }\n step((generator = generator.apply(thisArg, _arguments || [])).next());\n });\n}\n\nexport function __generator(thisArg, body) {\n var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g = Object.create((typeof Iterator === \"function\" ? Iterator : Object).prototype);\n return g.next = verb(0), g[\"throw\"] = verb(1), g[\"return\"] = verb(2), typeof Symbol === \"function\" && (g[Symbol.iterator] = function() { return this; }), g;\n function verb(n) { return function (v) { return step([n, v]); }; }\n function step(op) {\n if (f) throw new TypeError(\"Generator is already executing.\");\n while (g && (g = 0, op[0] && (_ = 0)), _) try {\n if (f = 1, y && (t = op[0] & 2 ? y[\"return\"] : op[0] ? y[\"throw\"] || ((t = y[\"return\"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;\n if (y = 0, t) op = [op[0] & 2, t.value];\n switch (op[0]) {\n case 0: case 1: t = op; break;\n case 4: _.label++; return { value: op[1], done: false };\n case 5: _.label++; y = op[1]; op = [0]; continue;\n case 7: op = _.ops.pop(); _.trys.pop(); continue;\n default:\n if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }\n if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }\n if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }\n if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }\n if (t[2]) _.ops.pop();\n _.trys.pop(); continue;\n }\n op = body.call(thisArg, _);\n } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }\n if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };\n }\n}\n\nexport var __createBinding = Object.create ? (function(o, m, k, k2) {\n if (k2 === undefined) k2 = k;\n var desc = Object.getOwnPropertyDescriptor(m, k);\n if (!desc || (\"get\" in desc ? 
!m.__esModule : desc.writable || desc.configurable)) {\n desc = { enumerable: true, get: function() { return m[k]; } };\n }\n Object.defineProperty(o, k2, desc);\n}) : (function(o, m, k, k2) {\n if (k2 === undefined) k2 = k;\n o[k2] = m[k];\n});\n\nexport function __exportStar(m, o) {\n for (var p in m) if (p !== \"default\" && !Object.prototype.hasOwnProperty.call(o, p)) __createBinding(o, m, p);\n}\n\nexport function __values(o) {\n var s = typeof Symbol === \"function\" && Symbol.iterator, m = s && o[s], i = 0;\n if (m) return m.call(o);\n if (o && typeof o.length === \"number\") return {\n next: function () {\n if (o && i >= o.length) o = void 0;\n return { value: o && o[i++], done: !o };\n }\n };\n throw new TypeError(s ? \"Object is not iterable.\" : \"Symbol.iterator is not defined.\");\n}\n\nexport function __read(o, n) {\n var m = typeof Symbol === \"function\" && o[Symbol.iterator];\n if (!m) return o;\n var i = m.call(o), r, ar = [], e;\n try {\n while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);\n }\n catch (error) { e = { error: error }; }\n finally {\n try {\n if (r && !r.done && (m = i[\"return\"])) m.call(i);\n }\n finally { if (e) throw e.error; }\n }\n return ar;\n}\n\n/** @deprecated */\nexport function __spread() {\n for (var ar = [], i = 0; i < arguments.length; i++)\n ar = ar.concat(__read(arguments[i]));\n return ar;\n}\n\n/** @deprecated */\nexport function __spreadArrays() {\n for (var s = 0, i = 0, il = arguments.length; i < il; i++) s += arguments[i].length;\n for (var r = Array(s), k = 0, i = 0; i < il; i++)\n for (var a = arguments[i], j = 0, jl = a.length; j < jl; j++, k++)\n r[k] = a[j];\n return r;\n}\n\nexport function __spreadArray(to, from, pack) {\n if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) {\n if (ar || !(i in from)) {\n if (!ar) ar = Array.prototype.slice.call(from, 0, i);\n ar[i] = from[i];\n }\n }\n return to.concat(ar || Array.prototype.slice.call(from));\n}\n\nexport function __await(v) {\n return this instanceof __await ? (this.v = v, this) : new __await(v);\n}\n\nexport function __asyncGenerator(thisArg, _arguments, generator) {\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\n var g = generator.apply(thisArg, _arguments || []), i, q = [];\n return i = Object.create((typeof AsyncIterator === \"function\" ? AsyncIterator : Object).prototype), verb(\"next\"), verb(\"throw\"), verb(\"return\", awaitReturn), i[Symbol.asyncIterator] = function () { return this; }, i;\n function awaitReturn(f) { return function (v) { return Promise.resolve(v).then(f, reject); }; }\n function verb(n, f) { if (g[n]) { i[n] = function (v) { return new Promise(function (a, b) { q.push([n, v, a, b]) > 1 || resume(n, v); }); }; if (f) i[n] = f(i[n]); } }\n function resume(n, v) { try { step(g[n](v)); } catch (e) { settle(q[0][3], e); } }\n function step(r) { r.value instanceof __await ? Promise.resolve(r.value.v).then(fulfill, reject) : settle(q[0][2], r); }\n function fulfill(value) { resume(\"next\", value); }\n function reject(value) { resume(\"throw\", value); }\n function settle(f, v) { if (f(v), q.shift(), q.length) resume(q[0][0], q[0][1]); }\n}\n\nexport function __asyncDelegator(o) {\n var i, p;\n return i = {}, verb(\"next\"), verb(\"throw\", function (e) { throw e; }), verb(\"return\"), i[Symbol.iterator] = function () { return this; }, i;\n function verb(n, f) { i[n] = o[n] ? function (v) { return (p = !p) ? 
{ value: __await(o[n](v)), done: false } : f ? f(v) : v; } : f; }\n}\n\nexport function __asyncValues(o) {\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\n var m = o[Symbol.asyncIterator], i;\n return m ? m.call(o) : (o = typeof __values === \"function\" ? __values(o) : o[Symbol.iterator](), i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i);\n function verb(n) { i[n] = o[n] && function (v) { return new Promise(function (resolve, reject) { v = o[n](v), settle(resolve, reject, v.done, v.value); }); }; }\n function settle(resolve, reject, d, v) { Promise.resolve(v).then(function(v) { resolve({ value: v, done: d }); }, reject); }\n}\n\nexport function __makeTemplateObject(cooked, raw) {\n if (Object.defineProperty) { Object.defineProperty(cooked, \"raw\", { value: raw }); } else { cooked.raw = raw; }\n return cooked;\n};\n\nvar __setModuleDefault = Object.create ? (function(o, v) {\n Object.defineProperty(o, \"default\", { enumerable: true, value: v });\n}) : function(o, v) {\n o[\"default\"] = v;\n};\n\nexport function __importStar(mod) {\n if (mod && mod.__esModule) return mod;\n var result = {};\n if (mod != null) for (var k in mod) if (k !== \"default\" && Object.prototype.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);\n __setModuleDefault(result, mod);\n return result;\n}\n\nexport function __importDefault(mod) {\n return (mod && mod.__esModule) ? mod : { default: mod };\n}\n\nexport function __classPrivateFieldGet(receiver, state, kind, f) {\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a getter\");\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot read private member from an object whose class did not declare it\");\n return kind === \"m\" ? f : kind === \"a\" ? f.call(receiver) : f ? f.value : state.get(receiver);\n}\n\nexport function __classPrivateFieldSet(receiver, state, value, kind, f) {\n if (kind === \"m\") throw new TypeError(\"Private method is not writable\");\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a setter\");\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot write private member to an object whose class did not declare it\");\n return (kind === \"a\" ? f.call(receiver, value) : f ? f.value = value : state.set(receiver, value)), value;\n}\n\nexport function __classPrivateFieldIn(state, receiver) {\n if (receiver === null || (typeof receiver !== \"object\" && typeof receiver !== \"function\")) throw new TypeError(\"Cannot use 'in' operator on non-object\");\n return typeof state === \"function\" ? 
receiver === state : state.has(receiver);\n}\n\nexport function __addDisposableResource(env, value, async) {\n if (value !== null && value !== void 0) {\n if (typeof value !== \"object\" && typeof value !== \"function\") throw new TypeError(\"Object expected.\");\n var dispose, inner;\n if (async) {\n if (!Symbol.asyncDispose) throw new TypeError(\"Symbol.asyncDispose is not defined.\");\n dispose = value[Symbol.asyncDispose];\n }\n if (dispose === void 0) {\n if (!Symbol.dispose) throw new TypeError(\"Symbol.dispose is not defined.\");\n dispose = value[Symbol.dispose];\n if (async) inner = dispose;\n }\n if (typeof dispose !== \"function\") throw new TypeError(\"Object not disposable.\");\n if (inner) dispose = function() { try { inner.call(this); } catch (e) { return Promise.reject(e); } };\n env.stack.push({ value: value, dispose: dispose, async: async });\n }\n else if (async) {\n env.stack.push({ async: true });\n }\n return value;\n}\n\nvar _SuppressedError = typeof SuppressedError === \"function\" ? SuppressedError : function (error, suppressed, message) {\n var e = new Error(message);\n return e.name = \"SuppressedError\", e.error = error, e.suppressed = suppressed, e;\n};\n\nexport function __disposeResources(env) {\n function fail(e) {\n env.error = env.hasError ? new _SuppressedError(e, env.error, \"An error was suppressed during disposal.\") : e;\n env.hasError = true;\n }\n var r, s = 0;\n function next() {\n while (r = env.stack.pop()) {\n try {\n if (!r.async && s === 1) return s = 0, env.stack.push(r), Promise.resolve().then(next);\n if (r.dispose) {\n var result = r.dispose.call(r.value);\n if (r.async) return s |= 2, Promise.resolve(result).then(next, function(e) { fail(e); return next(); });\n }\n else s |= 1;\n }\n catch (e) {\n fail(e);\n }\n }\n if (s === 1) return env.hasError ? Promise.reject(env.error) : Promise.resolve();\n if (env.hasError) throw env.error;\n }\n return next();\n}\n\nexport default {\n __extends,\n __assign,\n __rest,\n __decorate,\n __param,\n __metadata,\n __awaiter,\n __generator,\n __createBinding,\n __exportStar,\n __values,\n __read,\n __spread,\n __spreadArrays,\n __spreadArray,\n __await,\n __asyncGenerator,\n __asyncDelegator,\n __asyncValues,\n __makeTemplateObject,\n __importStar,\n __importDefault,\n __classPrivateFieldGet,\n __classPrivateFieldSet,\n __classPrivateFieldIn,\n __addDisposableResource,\n __disposeResources,\n};\n", "/**\n * Returns true if the object is a function.\n * @param value The value to check\n */\nexport function isFunction(value: any): value is (...args: any[]) => any {\n return typeof value === 'function';\n}\n", "/**\n * Used to create Error subclasses until the community moves away from ES5.\n *\n * This is because compiling from TypeScript down to ES5 has issues with subclassing Errors\n * as well as other built-in types: https://github.com/Microsoft/TypeScript/issues/12123\n *\n * @param createImpl A factory function to create the actual constructor implementation. 
The returned\n * function should be a named function that calls `_super` internally.\n */\nexport function createErrorClass(createImpl: (_super: any) => any): T {\n const _super = (instance: any) => {\n Error.call(instance);\n instance.stack = new Error().stack;\n };\n\n const ctorFunc = createImpl(_super);\n ctorFunc.prototype = Object.create(Error.prototype);\n ctorFunc.prototype.constructor = ctorFunc;\n return ctorFunc;\n}\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface UnsubscriptionError extends Error {\n readonly errors: any[];\n}\n\nexport interface UnsubscriptionErrorCtor {\n /**\n * @deprecated Internal implementation detail. Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (errors: any[]): UnsubscriptionError;\n}\n\n/**\n * An error thrown when one or more errors have occurred during the\n * `unsubscribe` of a {@link Subscription}.\n */\nexport const UnsubscriptionError: UnsubscriptionErrorCtor = createErrorClass(\n (_super) =>\n function UnsubscriptionErrorImpl(this: any, errors: (Error | string)[]) {\n _super(this);\n this.message = errors\n ? `${errors.length} errors occurred during unsubscription:\n${errors.map((err, i) => `${i + 1}) ${err.toString()}`).join('\\n ')}`\n : '';\n this.name = 'UnsubscriptionError';\n this.errors = errors;\n }\n);\n", "/**\n * Removes an item from an array, mutating it.\n * @param arr The array to remove the item from\n * @param item The item to remove\n */\nexport function arrRemove(arr: T[] | undefined | null, item: T) {\n if (arr) {\n const index = arr.indexOf(item);\n 0 <= index && arr.splice(index, 1);\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { UnsubscriptionError } from './util/UnsubscriptionError';\nimport { SubscriptionLike, TeardownLogic, Unsubscribable } from './types';\nimport { arrRemove } from './util/arrRemove';\n\n/**\n * Represents a disposable resource, such as the execution of an Observable. A\n * Subscription has one important method, `unsubscribe`, that takes no argument\n * and just disposes the resource held by the subscription.\n *\n * Additionally, subscriptions may be grouped together through the `add()`\n * method, which will attach a child Subscription to the current Subscription.\n * When a Subscription is unsubscribed, all its children (and its grandchildren)\n * will be unsubscribed as well.\n */\nexport class Subscription implements SubscriptionLike {\n public static EMPTY = (() => {\n const empty = new Subscription();\n empty.closed = true;\n return empty;\n })();\n\n /**\n * A flag to indicate whether this Subscription has already been unsubscribed.\n */\n public closed = false;\n\n private _parentage: Subscription[] | Subscription | null = null;\n\n /**\n * The list of registered finalizers to execute upon unsubscription. Adding and removing from this\n * list occurs in the {@link #add} and {@link #remove} methods.\n */\n private _finalizers: Exclude[] | null = null;\n\n /**\n * @param initialTeardown A function executed first as part of the finalization\n * process that is kicked off when {@link #unsubscribe} is called.\n */\n constructor(private initialTeardown?: () => void) {}\n\n /**\n * Disposes the resources held by the subscription. 
May, for instance, cancel\n * an ongoing Observable execution or cancel any other type of work that\n * started when the Subscription was created.\n */\n unsubscribe(): void {\n let errors: any[] | undefined;\n\n if (!this.closed) {\n this.closed = true;\n\n // Remove this from it's parents.\n const { _parentage } = this;\n if (_parentage) {\n this._parentage = null;\n if (Array.isArray(_parentage)) {\n for (const parent of _parentage) {\n parent.remove(this);\n }\n } else {\n _parentage.remove(this);\n }\n }\n\n const { initialTeardown: initialFinalizer } = this;\n if (isFunction(initialFinalizer)) {\n try {\n initialFinalizer();\n } catch (e) {\n errors = e instanceof UnsubscriptionError ? e.errors : [e];\n }\n }\n\n const { _finalizers } = this;\n if (_finalizers) {\n this._finalizers = null;\n for (const finalizer of _finalizers) {\n try {\n execFinalizer(finalizer);\n } catch (err) {\n errors = errors ?? [];\n if (err instanceof UnsubscriptionError) {\n errors = [...errors, ...err.errors];\n } else {\n errors.push(err);\n }\n }\n }\n }\n\n if (errors) {\n throw new UnsubscriptionError(errors);\n }\n }\n }\n\n /**\n * Adds a finalizer to this subscription, so that finalization will be unsubscribed/called\n * when this subscription is unsubscribed. If this subscription is already {@link #closed},\n * because it has already been unsubscribed, then whatever finalizer is passed to it\n * will automatically be executed (unless the finalizer itself is also a closed subscription).\n *\n * Closed Subscriptions cannot be added as finalizers to any subscription. Adding a closed\n * subscription to a any subscription will result in no operation. (A noop).\n *\n * Adding a subscription to itself, or adding `null` or `undefined` will not perform any\n * operation at all. (A noop).\n *\n * `Subscription` instances that are added to this instance will automatically remove themselves\n * if they are unsubscribed. Functions and {@link Unsubscribable} objects that you wish to remove\n * will need to be removed manually with {@link #remove}\n *\n * @param teardown The finalization logic to add to this subscription.\n */\n add(teardown: TeardownLogic): void {\n // Only add the finalizer if it's not undefined\n // and don't add a subscription to itself.\n if (teardown && teardown !== this) {\n if (this.closed) {\n // If this subscription is already closed,\n // execute whatever finalizer is handed to it automatically.\n execFinalizer(teardown);\n } else {\n if (teardown instanceof Subscription) {\n // We don't add closed subscriptions, and we don't add the same subscription\n // twice. Subscription unsubscribe is idempotent.\n if (teardown.closed || teardown._hasParent(this)) {\n return;\n }\n teardown._addParent(this);\n }\n (this._finalizers = this._finalizers ?? 
[]).push(teardown);\n }\n }\n }\n\n /**\n * Checks to see if a this subscription already has a particular parent.\n * This will signal that this subscription has already been added to the parent in question.\n * @param parent the parent to check for\n */\n private _hasParent(parent: Subscription) {\n const { _parentage } = this;\n return _parentage === parent || (Array.isArray(_parentage) && _parentage.includes(parent));\n }\n\n /**\n * Adds a parent to this subscription so it can be removed from the parent if it\n * unsubscribes on it's own.\n *\n * NOTE: THIS ASSUMES THAT {@link _hasParent} HAS ALREADY BEEN CHECKED.\n * @param parent The parent subscription to add\n */\n private _addParent(parent: Subscription) {\n const { _parentage } = this;\n this._parentage = Array.isArray(_parentage) ? (_parentage.push(parent), _parentage) : _parentage ? [_parentage, parent] : parent;\n }\n\n /**\n * Called on a child when it is removed via {@link #remove}.\n * @param parent The parent to remove\n */\n private _removeParent(parent: Subscription) {\n const { _parentage } = this;\n if (_parentage === parent) {\n this._parentage = null;\n } else if (Array.isArray(_parentage)) {\n arrRemove(_parentage, parent);\n }\n }\n\n /**\n * Removes a finalizer from this subscription that was previously added with the {@link #add} method.\n *\n * Note that `Subscription` instances, when unsubscribed, will automatically remove themselves\n * from every other `Subscription` they have been added to. This means that using the `remove` method\n * is not a common thing and should be used thoughtfully.\n *\n * If you add the same finalizer instance of a function or an unsubscribable object to a `Subscription` instance\n * more than once, you will need to call `remove` the same number of times to remove all instances.\n *\n * All finalizer instances are removed to free up memory upon unsubscription.\n *\n * @param teardown The finalizer to remove from this subscription\n */\n remove(teardown: Exclude): void {\n const { _finalizers } = this;\n _finalizers && arrRemove(_finalizers, teardown);\n\n if (teardown instanceof Subscription) {\n teardown._removeParent(this);\n }\n }\n}\n\nexport const EMPTY_SUBSCRIPTION = Subscription.EMPTY;\n\nexport function isSubscription(value: any): value is Subscription {\n return (\n value instanceof Subscription ||\n (value && 'closed' in value && isFunction(value.remove) && isFunction(value.add) && isFunction(value.unsubscribe))\n );\n}\n\nfunction execFinalizer(finalizer: Unsubscribable | (() => void)) {\n if (isFunction(finalizer)) {\n finalizer();\n } else {\n finalizer.unsubscribe();\n }\n}\n", "import { Subscriber } from './Subscriber';\nimport { ObservableNotification } from './types';\n\n/**\n * The {@link GlobalConfig} object for RxJS. It is used to configure things\n * like how to react on unhandled errors.\n */\nexport const config: GlobalConfig = {\n onUnhandledError: null,\n onStoppedNotification: null,\n Promise: undefined,\n useDeprecatedSynchronousErrorHandling: false,\n useDeprecatedNextContext: false,\n};\n\n/**\n * The global configuration object for RxJS, used to configure things\n * like how to react on unhandled errors. Accessible via {@link config}\n * object.\n */\nexport interface GlobalConfig {\n /**\n * A registration point for unhandled errors from RxJS. These are errors that\n * cannot were not handled by consuming code in the usual subscription path. 
For\n * example, if you have this configured, and you subscribe to an observable without\n * providing an error handler, errors from that subscription will end up here. This\n * will _always_ be called asynchronously on another job in the runtime. This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onUnhandledError: ((err: any) => void) | null;\n\n /**\n * A registration point for notifications that cannot be sent to subscribers because they\n * have completed, errored or have been explicitly unsubscribed. By default, next, complete\n * and error notifications sent to stopped subscribers are noops. However, sometimes callers\n * might want a different behavior. For example, with sources that attempt to report errors\n * to stopped subscribers, a caller can configure RxJS to throw an unhandled error instead.\n * This will _always_ be called asynchronously on another job in the runtime. This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onStoppedNotification: ((notification: ObservableNotification, subscriber: Subscriber) => void) | null;\n\n /**\n * The promise constructor used by default for {@link Observable#toPromise toPromise} and {@link Observable#forEach forEach}\n * methods.\n *\n * @deprecated As of version 8, RxJS will no longer support this sort of injection of a\n * Promise constructor. If you need a Promise implementation other than native promises,\n * please polyfill/patch Promise as you see appropriate. Will be removed in v8.\n */\n Promise?: PromiseConstructorLike;\n\n /**\n * If true, turns on synchronous error rethrowing, which is a deprecated behavior\n * in v6 and higher. This behavior enables bad patterns like wrapping a subscribe\n * call in a try/catch block. It also enables producer interference, a nasty bug\n * where a multicast can be broken for all observers by a downstream consumer with\n * an unhandled error. DO NOT USE THIS FLAG UNLESS IT'S NEEDED TO BUY TIME\n * FOR MIGRATION REASONS.\n *\n * @deprecated As of version 8, RxJS will no longer support synchronous throwing\n * of unhandled errors. All errors will be thrown on a separate call stack to prevent bad\n * behaviors described above. Will be removed in v8.\n */\n useDeprecatedSynchronousErrorHandling: boolean;\n\n /**\n * If true, enables an as-of-yet undocumented feature from v5: The ability to access\n * `unsubscribe()` via `this` context in `next` functions created in observers passed\n * to `subscribe`.\n *\n * This is being removed because the performance was severely problematic, and it could also cause\n * issues when types other than POJOs are passed to subscribe as subscribers, as they will likely have\n * their `this` context overwritten.\n *\n * @deprecated As of version 8, RxJS will no longer support altering the\n * context of next functions provided as part of an observer to Subscribe. Instead,\n * you will have access to a subscription or a signal or token that will allow you to do things like\n * unsubscribe and test closed status. 
Will be removed in v8.\n */\n useDeprecatedNextContext: boolean;\n}\n", "import type { TimerHandle } from './timerHandle';\ntype SetTimeoutFunction = (handler: () => void, timeout?: number, ...args: any[]) => TimerHandle;\ntype ClearTimeoutFunction = (handle: TimerHandle) => void;\n\ninterface TimeoutProvider {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n delegate:\n | {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n }\n | undefined;\n}\n\nexport const timeoutProvider: TimeoutProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n setTimeout(handler: () => void, timeout?: number, ...args) {\n const { delegate } = timeoutProvider;\n if (delegate?.setTimeout) {\n return delegate.setTimeout(handler, timeout, ...args);\n }\n return setTimeout(handler, timeout, ...args);\n },\n clearTimeout(handle) {\n const { delegate } = timeoutProvider;\n return (delegate?.clearTimeout || clearTimeout)(handle as any);\n },\n delegate: undefined,\n};\n", "import { config } from '../config';\nimport { timeoutProvider } from '../scheduler/timeoutProvider';\n\n/**\n * Handles an error on another job either with the user-configured {@link onUnhandledError},\n * or by throwing it on that new job so it can be picked up by `window.onerror`, `process.on('error')`, etc.\n *\n * This should be called whenever there is an error that is out-of-band with the subscription\n * or when an error hits a terminal boundary of the subscription and no error handler was provided.\n *\n * @param err the error to report\n */\nexport function reportUnhandledError(err: any) {\n timeoutProvider.setTimeout(() => {\n const { onUnhandledError } = config;\n if (onUnhandledError) {\n // Execute the user-configured error handler.\n onUnhandledError(err);\n } else {\n // Throw so it is picked up by the runtime's uncaught error mechanism.\n throw err;\n }\n });\n}\n", "/* tslint:disable:no-empty */\nexport function noop() { }\n", "import { CompleteNotification, NextNotification, ErrorNotification } from './types';\n\n/**\n * A completion object optimized for memory use and created to be the\n * same \"shape\" as other notifications in v8.\n * @internal\n */\nexport const COMPLETE_NOTIFICATION = (() => createNotification('C', undefined, undefined) as CompleteNotification)();\n\n/**\n * Internal use only. Creates an optimized error notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function errorNotification(error: any): ErrorNotification {\n return createNotification('E', undefined, error) as any;\n}\n\n/**\n * Internal use only. Creates an optimized next notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function nextNotification(value: T) {\n return createNotification('N', value, undefined) as NextNotification;\n}\n\n/**\n * Ensures that all notifications created internally have the same \"shape\" in v8.\n *\n * TODO: This is only exported to support a crazy legacy test in `groupBy`.\n * @internal\n */\nexport function createNotification(kind: 'N' | 'E' | 'C', value: any, error: any) {\n return {\n kind,\n value,\n error,\n };\n}\n", "import { config } from '../config';\n\nlet context: { errorThrown: boolean; error: any } | null = null;\n\n/**\n * Handles dealing with errors for super-gross mode. 
Creates a context, in which\n * any synchronously thrown errors will be passed to {@link captureError}. Which\n * will record the error such that it will be rethrown after the call back is complete.\n * TODO: Remove in v8\n * @param cb An immediately executed function.\n */\nexport function errorContext(cb: () => void) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n const isRoot = !context;\n if (isRoot) {\n context = { errorThrown: false, error: null };\n }\n cb();\n if (isRoot) {\n const { errorThrown, error } = context!;\n context = null;\n if (errorThrown) {\n throw error;\n }\n }\n } else {\n // This is the general non-deprecated path for everyone that\n // isn't crazy enough to use super-gross mode (useDeprecatedSynchronousErrorHandling)\n cb();\n }\n}\n\n/**\n * Captures errors only in super-gross mode.\n * @param err the error to capture\n */\nexport function captureError(err: any) {\n if (config.useDeprecatedSynchronousErrorHandling && context) {\n context.errorThrown = true;\n context.error = err;\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { Observer, ObservableNotification } from './types';\nimport { isSubscription, Subscription } from './Subscription';\nimport { config } from './config';\nimport { reportUnhandledError } from './util/reportUnhandledError';\nimport { noop } from './util/noop';\nimport { nextNotification, errorNotification, COMPLETE_NOTIFICATION } from './NotificationFactories';\nimport { timeoutProvider } from './scheduler/timeoutProvider';\nimport { captureError } from './util/errorContext';\n\n/**\n * Implements the {@link Observer} interface and extends the\n * {@link Subscription} class. While the {@link Observer} is the public API for\n * consuming the values of an {@link Observable}, all Observers get converted to\n * a Subscriber, in order to provide Subscription-like capabilities such as\n * `unsubscribe`. Subscriber is a common type in RxJS, and crucial for\n * implementing operators, but it is rarely used as a public API.\n */\nexport class Subscriber extends Subscription implements Observer {\n /**\n * A static factory for a Subscriber, given a (potentially partial) definition\n * of an Observer.\n * @param next The `next` callback of an Observer.\n * @param error The `error` callback of an\n * Observer.\n * @param complete The `complete` callback of an\n * Observer.\n * @return A Subscriber wrapping the (partially defined)\n * Observer represented by the given arguments.\n * @deprecated Do not use. Will be removed in v8. There is no replacement for this\n * method, and there is no reason to be creating instances of `Subscriber` directly.\n * If you have a specific use case, please file an issue.\n */\n static create(next?: (x?: T) => void, error?: (e?: any) => void, complete?: () => void): Subscriber {\n return new SafeSubscriber(next, error, complete);\n }\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n protected isStopped: boolean = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n protected destination: Subscriber | Observer; // this `any` is the escape hatch to erase extra type param (e.g. R)\n\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n * There is no reason to directly create an instance of Subscriber. 
This type is exported for typings reasons.\n */\n constructor(destination?: Subscriber | Observer) {\n super();\n if (destination) {\n this.destination = destination;\n // Automatically chain subscriptions together here.\n // if destination is a Subscription, then it is a Subscriber.\n if (isSubscription(destination)) {\n destination.add(this);\n }\n } else {\n this.destination = EMPTY_OBSERVER;\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `next` from\n * the Observable, with a value. The Observable may call this method 0 or more\n * times.\n * @param value The `next` value.\n */\n next(value: T): void {\n if (this.isStopped) {\n handleStoppedNotification(nextNotification(value), this);\n } else {\n this._next(value!);\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `error` from\n * the Observable, with an attached `Error`. Notifies the Observer that\n * the Observable has experienced an error condition.\n * @param err The `error` exception.\n */\n error(err?: any): void {\n if (this.isStopped) {\n handleStoppedNotification(errorNotification(err), this);\n } else {\n this.isStopped = true;\n this._error(err);\n }\n }\n\n /**\n * The {@link Observer} callback to receive a valueless notification of type\n * `complete` from the Observable. Notifies the Observer that the Observable\n * has finished sending push-based notifications.\n */\n complete(): void {\n if (this.isStopped) {\n handleStoppedNotification(COMPLETE_NOTIFICATION, this);\n } else {\n this.isStopped = true;\n this._complete();\n }\n }\n\n unsubscribe(): void {\n if (!this.closed) {\n this.isStopped = true;\n super.unsubscribe();\n this.destination = null!;\n }\n }\n\n protected _next(value: T): void {\n this.destination.next(value);\n }\n\n protected _error(err: any): void {\n try {\n this.destination.error(err);\n } finally {\n this.unsubscribe();\n }\n }\n\n protected _complete(): void {\n try {\n this.destination.complete();\n } finally {\n this.unsubscribe();\n }\n }\n}\n\n/**\n * This bind is captured here because we want to be able to have\n * compatibility with monoid libraries that tend to use a method named\n * `bind`. In particular, a library called Monio requires this.\n */\nconst _bind = Function.prototype.bind;\n\nfunction bind any>(fn: Fn, thisArg: any): Fn {\n return _bind.call(fn, thisArg);\n}\n\n/**\n * Internal optimization only, DO NOT EXPOSE.\n * @internal\n */\nclass ConsumerObserver implements Observer {\n constructor(private partialObserver: Partial>) {}\n\n next(value: T): void {\n const { partialObserver } = this;\n if (partialObserver.next) {\n try {\n partialObserver.next(value);\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n\n error(err: any): void {\n const { partialObserver } = this;\n if (partialObserver.error) {\n try {\n partialObserver.error(err);\n } catch (error) {\n handleUnhandledError(error);\n }\n } else {\n handleUnhandledError(err);\n }\n }\n\n complete(): void {\n const { partialObserver } = this;\n if (partialObserver.complete) {\n try {\n partialObserver.complete();\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n}\n\nexport class SafeSubscriber extends Subscriber {\n constructor(\n observerOrNext?: Partial> | ((value: T) => void) | null,\n error?: ((e?: any) => void) | null,\n complete?: (() => void) | null\n ) {\n super();\n\n let partialObserver: Partial>;\n if (isFunction(observerOrNext) || !observerOrNext) {\n // The first argument is a function, not an observer. 
The next\n // two arguments *could* be observers, or they could be empty.\n partialObserver = {\n next: (observerOrNext ?? undefined) as ((value: T) => void) | undefined,\n error: error ?? undefined,\n complete: complete ?? undefined,\n };\n } else {\n // The first argument is a partial observer.\n let context: any;\n if (this && config.useDeprecatedNextContext) {\n // This is a deprecated path that made `this.unsubscribe()` available in\n // next handler functions passed to subscribe. This only exists behind a flag\n // now, as it is *very* slow.\n context = Object.create(observerOrNext);\n context.unsubscribe = () => this.unsubscribe();\n partialObserver = {\n next: observerOrNext.next && bind(observerOrNext.next, context),\n error: observerOrNext.error && bind(observerOrNext.error, context),\n complete: observerOrNext.complete && bind(observerOrNext.complete, context),\n };\n } else {\n // The \"normal\" path. Just use the partial observer directly.\n partialObserver = observerOrNext;\n }\n }\n\n // Wrap the partial observer to ensure it's a full observer, and\n // make sure proper error handling is accounted for.\n this.destination = new ConsumerObserver(partialObserver);\n }\n}\n\nfunction handleUnhandledError(error: any) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n captureError(error);\n } else {\n // Ideal path, we report this as an unhandled error,\n // which is thrown on a new call stack.\n reportUnhandledError(error);\n }\n}\n\n/**\n * An error handler used when no error handler was supplied\n * to the SafeSubscriber -- meaning no error handler was supplied\n * do the `subscribe` call on our observable.\n * @param err The error to handle\n */\nfunction defaultErrorHandler(err: any) {\n throw err;\n}\n\n/**\n * A handler for notifications that cannot be sent to a stopped subscriber.\n * @param notification The notification being sent.\n * @param subscriber The stopped subscriber.\n */\nfunction handleStoppedNotification(notification: ObservableNotification, subscriber: Subscriber) {\n const { onStoppedNotification } = config;\n onStoppedNotification && timeoutProvider.setTimeout(() => onStoppedNotification(notification, subscriber));\n}\n\n/**\n * The observer used as a stub for subscriptions where the user did not\n * pass any arguments to `subscribe`. Comes with the default error handling\n * behavior.\n */\nexport const EMPTY_OBSERVER: Readonly> & { closed: true } = {\n closed: true,\n next: noop,\n error: defaultErrorHandler,\n complete: noop,\n};\n", "/**\n * Symbol.observable or a string \"@@observable\". Used for interop\n *\n * @deprecated We will no longer be exporting this symbol in upcoming versions of RxJS.\n * Instead polyfill and use Symbol.observable directly *or* use https://www.npmjs.com/package/symbol-observable\n */\nexport const observable: string | symbol = (() => (typeof Symbol === 'function' && Symbol.observable) || '@@observable')();\n", "/**\n * This function takes one parameter and just returns it. 
Simply put,\n * this is like `(x: T): T => x`.\n *\n * ## Examples\n *\n * This is useful in some cases when using things like `mergeMap`\n *\n * ```ts\n * import { interval, take, map, range, mergeMap, identity } from 'rxjs';\n *\n * const source$ = interval(1000).pipe(take(5));\n *\n * const result$ = source$.pipe(\n * map(i => range(i)),\n * mergeMap(identity) // same as mergeMap(x => x)\n * );\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * Or when you want to selectively apply an operator\n *\n * ```ts\n * import { interval, take, identity } from 'rxjs';\n *\n * const shouldLimit = () => Math.random() < 0.5;\n *\n * const source$ = interval(1000);\n *\n * const result$ = source$.pipe(shouldLimit() ? take(5) : identity);\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * @param x Any value that is returned by this function\n * @returns The value passed as the first parameter to this function\n */\nexport function identity(x: T): T {\n return x;\n}\n", "import { identity } from './identity';\nimport { UnaryFunction } from '../types';\n\nexport function pipe(): typeof identity;\nexport function pipe(fn1: UnaryFunction): UnaryFunction;\nexport function pipe(fn1: UnaryFunction, fn2: UnaryFunction): UnaryFunction;\nexport function pipe(fn1: UnaryFunction, fn2: UnaryFunction, fn3: UnaryFunction): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction,\n fn9: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction,\n fn9: UnaryFunction,\n ...fns: UnaryFunction[]\n): UnaryFunction;\n\n/**\n * pipe() can be called on one or more functions, each of which can take one argument (\"UnaryFunction\")\n * and uses it to return a value.\n * It returns a function that takes one argument, passes it to the first UnaryFunction, and then\n * passes the result to the next one, passes that result to the next one, and so on. 

+
+

Walkthrough

+

1. Go to the clustering tab

+

Once on the platform, go to the clustering tab in the menu on the left of the screen.

+

Here, phospho runs various algorithms to analyze your user interactions and detect patterns.

+

We group similar interactions together to help you understand what your users are talking about.

+

2. Run the clustering

+

Click on Run cluster detection to start the process.

+

Clusters

+
+

Info

+

Clustering is not yet a continuous process: you will need to re-run it manually to get the latest results.

+
+

How it works

+

Phospho uses the phospho intent-embed model to represent user interactions in a high-dimensional space. Then, we use clustering techniques to group similar user messages together. Finally, we generate a summary of each cluster to help you understand what your users are talking about.

+
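As a rough illustration of the general embed-then-cluster idea (this is not phospho's actual pipeline), here is a minimal Python sketch: it uses a TF-IDF vectorizer as a toy stand-in for a real embedding model, and scikit-learn's KMeans to group messages.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-in for a real embedding model (phospho uses its own intent-embed model)
messages = [
    "How do I reset my password?",
    "I forgot my password",
    "What are your pricing plans?",
    "How much does the pro plan cost?",
]
embeddings = TfidfVectorizer().fit_transform(messages)

# Group similar messages together, then inspect what landed in each cluster
kmeans = KMeans(n_clusters=2, n_init=10).fit(embeddings)
for message, cluster_id in zip(messages, kmeans.labels_):
    print(cluster_id, message)

On the platform, all of this happens for you: you only need to run the cluster detection.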

Next steps

+
+
    +
  • +

    LLM as a judge

    +
    +

    Leverage LLM as a judge techniques to analyze your LLM app's performance. Quick and simple setup

    +

    Read more

    +
  • +
  • +

    Understand your data

    +
    +

    Get insights on your data through visualization, clustering and more. Quick and easy

    +

    Read more

    +
  • +
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/guides/welcome-guide/index.html b/guides/welcome-guide/index.html new file mode 100644 index 0000000..a65150f --- /dev/null +++ b/guides/welcome-guide/index.html @@ -0,0 +1,2419 @@ + + + + + + + + + + + + + + + + + + + + + + + + Welcome! - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Welcome!

+ +

Welcome to the phospho platform guides. If you're unsure where to start, check out our getting started guide. If you're looking for a deeper dive, you'll find everything you need below.

+

Check out this video for a quick introduction to the platform.

+

+

+

Monitor interactions between your LLM app and your users. Explore conversation topics and leverage real-time data. Get AI analytics and product-level insights to improve your LLM app.

+

Keywords: logging, automatic evaluations, experiments, A/B tests, user feedback, testing

+

Guides to get you started

+
+
    +
  • +

    Get started

    +
    +

    Add text analytics in your LLM app in a blitz. Quick and easy setup

    +

    Learn More

    +
  • +
  • +

    LLM as a judge

    +
    +

    Leverage LLM as a judge techniques to analyze your LLM app's performance. Simple setup

    +

    Learn More

    +
  • +
  • +

    Figure out User Intentions

    +
    +

    Figure out what your users are talking about. See through the fog

    +

    Learn More

    +
  • +
  • +

    Understand your data

    +
    +

    Get insights on your data through visualization, clustering, and more. Insights and analytics

    +

    Learn More

    +
  • +
+
+

Eager to see it in action? Get started in minutes.

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/phospho-mkdocs/docs/images/clustering-demo.gif b/images/clustering-demo.gif similarity index 100% rename from phospho-mkdocs/docs/images/clustering-demo.gif rename to images/clustering-demo.gif diff --git a/phospho-mkdocs/docs/images/explore/abtest.jpeg b/images/explore/abtest.jpeg similarity index 100% rename from phospho-mkdocs/docs/images/explore/abtest.jpeg rename to images/explore/abtest.jpeg diff --git a/phospho-mkdocs/docs/images/explore/events detection/Create event.png b/images/explore/events detection/Create event.png similarity index 100% rename from phospho-mkdocs/docs/images/explore/events detection/Create event.png rename to images/explore/events detection/Create event.png diff --git a/phospho-mkdocs/docs/images/explore/events detection/Event suggestion.png b/images/explore/events detection/Event suggestion.png similarity index 100% rename from phospho-mkdocs/docs/images/explore/events detection/Event suggestion.png rename to images/explore/events detection/Event suggestion.png diff --git a/phospho-mkdocs/docs/images/guides/LLM_judge/add_event.png b/images/guides/LLM_judge/add_event.png similarity index 100% rename from phospho-mkdocs/docs/images/guides/LLM_judge/add_event.png rename to images/guides/LLM_judge/add_event.png diff --git a/phospho-mkdocs/docs/images/guides/LLM_judge/events_page.png b/images/guides/LLM_judge/events_page.png similarity index 100% rename from phospho-mkdocs/docs/images/guides/LLM_judge/events_page.png rename to images/guides/LLM_judge/events_page.png diff --git a/phospho-mkdocs/docs/images/guides/getting_started/add_event.png b/images/guides/getting_started/add_event.png similarity index 100% rename from phospho-mkdocs/docs/images/guides/getting_started/add_event.png rename to images/guides/getting_started/add_event.png diff --git a/phospho-mkdocs/docs/images/guides/getting_started/clusters.png b/images/guides/getting_started/clusters.png similarity index 100% rename from phospho-mkdocs/docs/images/guides/getting_started/clusters.png rename to images/guides/getting_started/clusters.png diff --git a/phospho-mkdocs/docs/images/guides/getting_started/detect_events.png b/images/guides/getting_started/detect_events.png similarity index 100% rename from phospho-mkdocs/docs/images/guides/getting_started/detect_events.png rename to images/guides/getting_started/detect_events.png diff --git a/phospho-mkdocs/docs/images/guides/getting_started/filters.png b/images/guides/getting_started/filters.png similarity index 100% rename from phospho-mkdocs/docs/images/guides/getting_started/filters.png rename to images/guides/getting_started/filters.png diff --git a/phospho-mkdocs/docs/images/guides/getting_started/import_data.png b/images/guides/getting_started/import_data.png similarity index 100% rename from phospho-mkdocs/docs/images/guides/getting_started/import_data.png rename to images/guides/getting_started/import_data.png diff --git a/phospho-mkdocs/docs/images/guides/getting_started/settings.png b/images/guides/getting_started/settings.png similarity index 100% rename from phospho-mkdocs/docs/images/guides/getting_started/settings.png rename to images/guides/getting_started/settings.png diff --git a/phospho-mkdocs/docs/images/guides/user-intentions/clusters.png b/images/guides/user-intentions/clusters.png similarity index 100% rename from phospho-mkdocs/docs/images/guides/user-intentions/clusters.png rename to images/guides/user-intentions/clusters.png diff --git 
a/phospho-mkdocs/docs/images/hero-dark.svg b/images/hero-dark.svg similarity index 100% rename from phospho-mkdocs/docs/images/hero-dark.svg rename to images/hero-dark.svg diff --git a/phospho-mkdocs/docs/images/hero-light.svg b/images/hero-light.svg similarity index 100% rename from phospho-mkdocs/docs/images/hero-light.svg rename to images/hero-light.svg diff --git a/phospho-mkdocs/docs/images/import/api_key_langsmith.png b/images/import/api_key_langsmith.png similarity index 100% rename from phospho-mkdocs/docs/images/import/api_key_langsmith.png rename to images/import/api_key_langsmith.png diff --git a/phospho-mkdocs/docs/images/import/import_data.png b/images/import/import_data.png similarity index 100% rename from phospho-mkdocs/docs/images/import/import_data.png rename to images/import/import_data.png diff --git a/phospho-mkdocs/docs/images/import/langfuse_api_keys.png b/images/import/langfuse_api_keys.png similarity index 100% rename from phospho-mkdocs/docs/images/import/langfuse_api_keys.png rename to images/import/langfuse_api_keys.png diff --git a/phospho-mkdocs/docs/images/import/start_sending_data.png b/images/import/start_sending_data.png similarity index 100% rename from phospho-mkdocs/docs/images/import/start_sending_data.png rename to images/import/start_sending_data.png diff --git a/phospho-mkdocs/docs/images/supabase/create_webhook_1.png b/images/supabase/create_webhook_1.png similarity index 100% rename from phospho-mkdocs/docs/images/supabase/create_webhook_1.png rename to images/supabase/create_webhook_1.png diff --git a/phospho-mkdocs/docs/images/supabase/create_webhook_2.png b/images/supabase/create_webhook_2.png similarity index 100% rename from phospho-mkdocs/docs/images/supabase/create_webhook_2.png rename to images/supabase/create_webhook_2.png diff --git a/phospho-mkdocs/docs/images/supabase/secrets_edge_functions.png b/images/supabase/secrets_edge_functions.png similarity index 100% rename from phospho-mkdocs/docs/images/supabase/secrets_edge_functions.png rename to images/supabase/secrets_edge_functions.png diff --git a/phospho-mkdocs/docs/images/supabase/webhook_tab.png b/images/supabase/webhook_tab.png similarity index 100% rename from phospho-mkdocs/docs/images/supabase/webhook_tab.png rename to images/supabase/webhook_tab.png diff --git a/import-data/api-integration/index.html b/import-data/api-integration/index.html new file mode 100644 index 0000000..c51166a --- /dev/null +++ b/import-data/api-integration/index.html @@ -0,0 +1,2716 @@ + + + + + + + + + + + + + + + + + + + + + + + + Setup logging in your app - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + +

Setup logging in your app

+ +

You can setup the logging to phospho in your app in a few minutes.

+

1. Get your phospho API key and your project id

+

Go to the phospho platform. Login or create an account if you don't have one.

+

If this is your first time using phospho, a Default project has been created for you. On the main page, note down the project id and follow the link to create a new API key.

+

If you already have a project, go to Settings. Your project id is displayed on the top of the page. To create an API key, click on the Manage Organization & API keys button. Store your API key safely!

+

2. Setup phospho logging in your app

+

Add environment variables

+

In your code, add the following environment variables:

+
export PHOSPHO_API_KEY="your_api_key"
+export PHOSPHO_PROJECT_ID="your_project_id"
+
+

Log to phospho

+

The basic abstraction of phospho is the task. If you're a programmer, you can think of a task as a function.

+
    +
  • input (str): The text that goes into the system. Eg: the user message.
  • +
  • output (Optional[str]): The text that comes out of the system. Eg: the system response.
  • +
+

We prefer to use this abstraction because of its flexibility. You can log any text to a task, not just chat messages: call to an LLM, answering a question, searching in documents, summarizing a text, performing inference of a model, steps of a chain-of-thought...

+

Tasks can be grouped into sessions. Tasks and Sessions can be attached to users.

+

How to setup logging?

+
+
+
+

The phospho Python module is the easiest way to log to phospho. It is compatible with Python 3.9+.

+
pip install --upgrade phospho
+
+

To log tasks, use phospho.log. The logged tasks are analyzed by the phospho analytics pipeline.

+
import phospho
+
+# By default, phospho reads the PHOSPHO_API_KEY and PHOSPHO_PROJECT_ID from the environment variables
+phospho.init()
+
+# Example
+input = "Hello! This is what the user asked to the system"
+output = "This is the response showed to the user by the app."
+
+# This is how you log a task to phospho
+phospho.log(
+  input=input,
+  output=output,
+  # Optional: for chats, group tasks together in sessions
+  # session_id = "session_1",
+  # Optional: attach tasks to users
+  # user_id = "user_1",
+  # Optional: add metadata to the task
+  # metadata = {"system_prompt": "You are a helpful assistant."},
+)
+
+
+
    +
  • +

    More about logging in Python

    +
    +

    Did you know you could log OpenAI completions, streaming outputs and metadata? Learn more by clicking here.

    +

    Read more

    +
  • +
+
+
+
+

The phospho JavaScript module is the easiest way to log to phospho. It is compatible with Node.js.

+

Types are available for your TypeScript codebase.

+
npm i phospho
+
+

To log tasks, use phospho.log. The logged tasks are analyzed by the phospho analytics pipeline.

+
import { phospho } from "phospho";
+
+// By default, phospho reads the PHOSPHO_API_KEY and PHOSPHO_PROJECT_ID from the environment variables
+phospho.init();
+
+// Example
+const input = "Hello! This is what the user asked to the system";
+const output = "This is the response showed to the user by the app.";
+
+// This is how you log a task to phospho
+phospho.log({
+  input,
+  output,
+  // Optional: for chats, group tasks together in sessions
+  // session_id: "session_1",
+  // Optional: attach tasks to users
+  // user_id: "user_1",
+  // Optional: add metadata to the task
+  // metadata: { system_prompt: "You are a helpful assistant." },
+});
+
+
+
    +
  • +

    More about logging in Javascript

    +
    +

    Did you know you could log OpenAI completions, streaming outputs and metadata? Learn more by clicking here.

    +

    Read more

    +
  • +
+
+
+
+

You can directly log to phospho using the /log endpoint of the API.

+
curl -X POST https://api.phospho.ai/v2/log/$PHOSPHO_PROJECT_ID \
+-H "Authorization: Bearer $PHOSPHO_API_KEY" \
+-H "Content-Type: application/json" \
+-d '{
+    "batched_log_events": [
+        {
+            "input": "your_input",
+            "output": "your_output",
+            "session_id": "session_1",
+            "user_id": "user_1",
+            "metadata": {"system_prompt": "You are a helpful assistant."},
+        }
+    ]
+}'
+
+
+

Info

+

The session_id, user_id and metadata fields are optional.

+
+
+
    +
  • +

    API reference

    +
    +

    Create a tailored integration with the API. Learn more by clicking here.

    +

    Read more

    +
  • +
+
+
+
+

We provide a Langchain callback in our Python module.

+
pip install --upgrade phospho
+
+
from phospho.integrations import PhosphoLangchainCallbackHandler
+
+chain = ... # Your Langchain agent or chain
+
+chain.invoke(
+    "Your chain input",
+    # Add the callback handler to the config
+    config={"callbacks": [PhosphoLangchainCallbackHandler()]},
+)
+
+
+
    +
  • +

    Langchain guide

    +
    +

    Customize what is logged to phospho by customizing the callback. Learn more by clicking here.

    +

    Read more

    +
  • +
+
+
+
+

Integrate phospho to your Supabase app is as simple as using the phospho API.

+
+

Note

+

Follow the Supabase guide to leverage the power of product analytics in your +Supabase app!

+
+
+
    +
  • +

    Read the supabase guide

    +
    +

    Get started with Supabase and phospho. Learn more by clicking here.

    +

    Read more

    +
  • +
+
+
+
+
+

3. Get insights in the dashboard

+

phospho runs analytics pipelines on the logged messages. Discover the insights in the phospho dashboard.

+

Next steps

+
+
    +
  • +

    Automatic tagging

    +
    +

    Automatically annotate your text data and be alerted. Take action.

    +

    Learn more

    +
  • +
  • +

    Unsupervised clustering

    +
    +

    Group users' messages based on their intention. Find out what your users are talking about.

    +

    Learn more

    +
  • +
  • +

    AB Testing

    +
    +

    Run experiments and iterate on your LLM app, while keeping track of performances. Keep shipping.

    +

    Learn more

    +
  • +
  • +

    Flexible evaluation pipeline

    +
    +

    Discover how to run and design a text analytics pipeline using natural language. No code needed.

    +

    Learn more

    +
  • +
  • +

    User analytics

    +
    +

    Detect user languages, sentiment, and more. Get to know power users.

    +

    Learn more

    +
  • +
+
+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/import-data/import-file/index.html b/import-data/import-file/index.html new file mode 100644 index 0000000..deb18b3 --- /dev/null +++ b/import-data/import-file/index.html @@ -0,0 +1,2355 @@ + + + + + + + + + + + + + + + + + + + + + + + + Import a CSV or Excel file - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Format your file

+

Your CSV or Excel file needs to have the following columns:

+
    +
  • input : the input text data, usually the user message
  • +
  • output : the output text, usually the LLM app response
  • +
+

Additionally, you can add the following columns:

+
    +
  • task_id: an id of the task (input/output couple)
  • +
  • session_id: an id of the session. Messages with the same session_id will be grouped together in a single session
  • +
  • created_at: the creation date of the task (format it like "2021-09-01 12:00:00")
  • +
+

The maximum upload size with this method is 500MB.

+
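For reference, here is a minimal sketch of how such a file could be produced with pandas (the column values below are placeholders):

import pandas as pd

# Two example tasks from the same session; the values are only placeholders
df = pd.DataFrame(
    [
        {
            "input": "How do I reset my password?",
            "output": "Click 'Forgot password' on the login page.",
            "session_id": "session_1",
        },
        {
            "input": "Thanks, that worked!",
            "output": "Glad to hear it!",
            "session_id": "session_1",
        },
    ]
)
df.to_csv("phospho_import.csv", index=False)

You can then upload the resulting phospho_import.csv as described below.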

Upload your file to the platform

+

Click the setting icon at the top right of the screen and select Import data.

+

Import data

+

Then click the Upload dataset button and use the Choose file button to select your file.

+

Choose file

+

Your tasks will be populated in your project in a minute. You might need to refresh the page to see them.

+

Next steps

+ + + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/import-data/import-langfuse/index.html b/import-data/import-langfuse/index.html new file mode 100644 index 0000000..e0a3c66 --- /dev/null +++ b/import-data/import-langfuse/index.html @@ -0,0 +1,2359 @@ + + + + + + + + + + + + + + + + + + + + + + + + Import from Langfuse ๐Ÿชข - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Go to Langfuse and head to settings

+

Go to your langfuse account and head to the settings page, in the bottom left.

+

You will reach the API Keys page where you can create a new API key.

+

langfuse api key

+

Click on Create new API keys; you will need both the secret key and the public key.

+

Head to phospho and import your data

+

Click the settings icon at the top right of the screen and select Import data.

+

Click the settings icon

+

Then click the Import from Langfuse button.

+

You can now paste your Secret Key and your Public Key into the input fields.

+

+ This data is encrypted and stored securely. We need it to periodically fetch + your data from LangFuse and import it into phospho. +

+

Import from langfuse

+

Your data will be synced to your project in a minute.

+

Next steps

+

Default evaluators like language and sentiment will be run on messages. To create more events and to run them on your data, see the event detection page

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/import-data/import-langsmith/index.html b/import-data/import-langsmith/index.html new file mode 100644 index 0000000..ebab62b --- /dev/null +++ b/import-data/import-langsmith/index.html @@ -0,0 +1,2361 @@ + + + + + + + + + + + + + + + + + + + + + + + + Import from Langsmith ๐Ÿฆœ๐Ÿ”— - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Go to Langsmith and head to settings

+

Go to your langsmith account and head to the settings page in the bottom left.

+

You will reach the API Keys page where you can create a new API key in the top right corner.

+

langsmith api key

+

Create a new API key and copy it.

+

Head to phospho and import your data

+

Click the settings icon at the top right of the screen and select Import data.

+

Click the settings icon

+

Then click the Import from Langsmith button.

+

You can now paste your API key into the input field and enter the name of the Langsmith project you want to import.

+

+ This data is encrypted and stored securely. We need it to periodically fetch + your data from LangSmith and import it into phospho. +

+

Import from langsmith

+

Data will be synced to your project in a minute.

+

Next steps

+

Default evaluators like language and sentiment will be run on your data. To create more events and to run them on your data, see the event detection page

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/import-data/tracing/index.html b/import-data/tracing/index.html new file mode 100644 index 0000000..95ae924 --- /dev/null +++ b/import-data/tracing/index.html @@ -0,0 +1,2678 @@ + + + + + + + + + + + + + + + + + + + + + + + + Log intermediate steps - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+ +
+
+ + + +
+
+ + + + + + + + +

Log intermediate steps

+ +

To help you debug and deep dive into your LLM apps logs, you can set up tracing using the phospho library.

+

This traces every intermediate step of your LLM app pipeline, from the input text to the output text.

+

Setup

+

Install phospho

+
+

Info

+

This feature is currently only available for Python. NodeJS version coming soon!

+
+

Make sure you have the phospho module installed:

+
pip install -U phospho
+
+

Install OpenTelemetry instrumentations

+

phospho leverages OpenTelemetry instrumentations to trace your LLM app pipeline. To trace a library, you need to install the corresponding instrumentation.

+

For example, here is how to trace OpenAI and Mistral API calls:

+
# This will trace OpenAI API calls
+pip install opentelemetry-instrumentation-openai
+# This will trace MistralAI API calls
+pip install opentelemetry-instrumentation-mistralai
+
+

Refer to this list of available instrumentations to find the one that fits your needs.

+

Initialize phospho

+

Initialize phospho with phospho.init() and enable tracing with tracing=True:

+
import phospho
+
+phospho.init(tracing=True)
+
+

Automatic tracing

+

All calls to the installed instrumentations are traced.

+

For example, when you do phospho.log, the OpenAI API calls will be linked to this log.

+
import phospho
+from openai import OpenAI
+
+phospho.init(tracing=True)
+
+# This is your LLM app code
+openai_client = OpenAI()
+color = openai_client.chat.completions.create(
+    messages=[{"role": "user", "content": "Say a color"}],
+    model="gpt-4o-mini",
+).choices[0].message.content
+animal = openai_client.chat.completions.create(
+    messages=[{"role": "user", "content": "Say an animal"}],
+    model="gpt-4o-mini",
+).choices[0].message.content
+
+# This is how you log to phospho
+# All the API calls made by the OpenAI client will be linked to this log
+phospho.log(
+    input="Give me a color and an animal",
+    output=f"Color: {color}, Animal: {animal}",
+)
+
+

You can view intermediate steps in the Phospho dashboard when reading a message transcript.

+

In the automatic tracing mode, the link between API calls and logs is done using the timestamps. If you want more control, you can use the context tracing or manual tracing.

+

Context tracing

+

To have more control over which instrumentations calls are linked to which logs, define a context using the phospho.tracer() context block or @phospho.trace() decorator syntax.

+

Context block

+

This links all calls to the instrumentations made inside the context block to the phospho log. For example, this will link the OpenAI API call to the log:

+
with phospho.tracer(): 
+    messages = [{"role": "user", "content": "Say good bye"}]
+    response = openai_client.chat.completions.create(
+        messages=messages,
+        model="gpt-4o-mini",
+        max_tokens=1,
+    )
+    phospho.log(input="Say good bye", output=response)
+
+

To add session_id, task_id and metadata, pass them as arguments to the context block:

+
with phospho.tracer(
+    task_id="some_id", 
+    session_id="my_session_id", 
+    metadata={"user_id": "bob"}
+): 
+    messages = [{"role": "user", "content": "Say good bye"}]
+    response = openai_client.chat.completions.create(
+        messages=messages,
+        model="gpt-4o-mini",
+        max_tokens=1,
+    )
+    phospho.log(input="Say good bye", output=response)
+
+

Decorator syntax

+

This works the same way as the context block.

+
@phospho.trace()
+def my_function():
+    messages = [{"role": "user", "content": "Say good bye"}]
+    response = openai_client.chat.completions.create(
+        messages=messages,
+        model="gpt-4o-mini",
+        max_tokens=1,
+    )
+    phospho.log(input="Say good bye", output=response)
+
+my_function()
+
+
+

Note

+

The context is phospho.tracer, while the decorator is phospho.trace, without the r.

+
+

To add session_id, task_id and metadata, pass them as arguments to the decorator:

+
@phospho.trace(
+    task_id="some_id", 
+    session_id="my_session_id", 
+    metadata={"user_id": "bob"}
+)
+def my_function():
+    messages = [{"role": "user", "content": "Say good bye"}]
+    response = openai_client.chat.completions.create(
+        messages=messages,
+        model="gpt-4o-mini",
+        max_tokens=1,
+    )
+    phospho.log(input="Say good bye", output=response)
+
+

Manual tracing

+

Pass intermediate steps as a steps parameter to phospho.log to trace your pipeline:

+
phospho.log(
+    input="Give me a color and an animal",
+    output=f"Color: {color}, Animal: {animal}",
+    steps=[
+        {"name": "OpenAI API call", "input": "Say a color", "output": color},
+        {"name": "OpenAI API call", "input": "Say an animal", "output": animal},
+    ]
+)
+
+

This is useful to trace custom modules which don't have an OpenTelemetry instrumentation available: for example, document retrieval, data augmentation, etc.
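As an illustration, here is a hypothetical sketch of a retrieval step logged manually. The retrieve_documents and answer_with_llm functions below are placeholders standing in for your own pipeline code; only the steps parameter of phospho.log comes from the example above.

import phospho

phospho.init()

# Hypothetical stand-ins for your own pipeline steps
def retrieve_documents(question: str) -> list[str]:
    return ["The Concorde had a maximum cruising speed of 2,179 km per hour."]

def answer_with_llm(question: str, documents: list[str]) -> str:
    return f"Based on {len(documents)} document(s): about 2,179 km/h."

question = "What is the top speed of the Concorde?"
documents = retrieve_documents(question)
answer = answer_with_llm(question, documents)

phospho.log(
    input=question,
    output=answer,
    steps=[
        {"name": "Document retrieval", "input": question, "output": str(documents)},
    ],
)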

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/index.html b/index.html new file mode 100644 index 0000000..0533d14 --- /dev/null +++ b/index.html @@ -0,0 +1,2486 @@ + + + + + + + + + + + + + + + + + + + + Welcome to the phospho platform documentation! - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Welcome to the phospho platform documentation!

+

The phospho platform is the open source text analytics platform for LLM apps. Understand your users and turn text into insights.

+
    +
  • Cluster text messages to understand user intents and use cases
  • +
  • Tag, score, and classify new messages
  • +
  • Set up evaluations to get quantified scores
  • +
  • A/B test your LLM app
  • +
+

Keywords: clustering, automatic evaluations, A/B tests, user analytics

+

+

+
+
    +
  • +

    Get started now

    +
    +

    Clusterize your text messages in 5 minutes. No code required.

    +

    Getting started

    +
  • +
+
+

How does it work?

+
    +
  1. +

    Import data
    + Import messages to phospho (e.g., what the user asked, what the assistant answered).

    +
  2. +
  3. +

    Run analysis
    + Cluster messages and run analysis on the messages. No code required.

    +
  4. +
  5. +

    Explore results
    + Visualize results on the phospho dashboard and export analytics results with integrations.

    +
  6. +
+
+
    +
  • +

    Get started now

    +
    +

    Clusterize your text messages in 5 minutes. No code required.

    +

    Getting started

    +
  • +
+
+

Key features

+
+
    +
  • +

    Cluster messages

    +
    +

    Group users' messages based on their intention. Find out what your users are talking about.

    +

    Clustering

    +
  • +
  • +

    Import data

    +
    +

    Log all the important data of your LLM app. Get started in minutes.

    +

    Importing data

    +
  • +
  • +

    Automatic tagging

    +
    +

    Automatically annotate your text data and be alerted. Take action.

    +

    Tagging

    +
  • +
  • +

    AB Testing

    +
    +

    Run experiments and iterate on your LLM app, while keeping track of performances. Keep shipping.

    +

    AB Testing

    +
  • +
  • +

    Flexible evaluation pipeline

    +
    +

    Discover how to run and design a text analytics pipeline using natural language. No code needed.

    +

    Evaluation pipeline

    +
  • +
  • +

    User analytics

    +
    +

    Detect user languages, sentiment, and more. Get to know power users.

    +

    User analytics

    +
  • +
+
+

Eager to see it in action? Get started in minutes.

+ + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/integrations/argilla/index.html b/integrations/argilla/index.html new file mode 100644 index 0000000..8735d1f --- /dev/null +++ b/integrations/argilla/index.html @@ -0,0 +1,2330 @@ + + + + + + + + + + + + + + + + + + + + + + + + Export your data to Argilla - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Export your data to Argilla

+ +
+

Info

+

This feature is in preview. Contact us if you would like to try it out!

+
+

Argilla is a data annotation tool that allows you to label your data with ease.

+

You can export your data to an Argilla dataset by clicking on the "Export" button in the integration tab.

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/integrations/javascript/logging/index.html b/integrations/javascript/logging/index.html new file mode 100644 index 0000000..7268358 --- /dev/null +++ b/integrations/javascript/logging/index.html @@ -0,0 +1,2713 @@ + + + + + + + + + + + + + + + + + + + + + + + + Log to phospho with Javascript - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + +

Log to phospho with Javascript

+ +

Log tasks to phospho

+

Tasks are the basic bricks that make up your LLM apps. If you're a programmer, you can think of tasks like functions.

+

A task is made of at least two things:

+
    +
  • input (string): What goes into a task. Eg: what the user asks to the assistant.
  • +
  • output (string?): What goes out of the task. Eg: what the assistant replied to the user.
  • +
+

Example of tasks you can log to phospho:

+
    +
  • Call to an LLM (input = query, output = llm response)
  • +
  • Answering a question (input = question, output = answer)
  • +
  • Searching in documents (input = search query, output = document)
  • +
  • Summarizing a text (input = text, output = summary)
  • +
  • Performing inference of a model (input = X, output = y)
  • +
+

Install the phospho module

+

The phospho JavaScript module is the easiest way to log to phospho. It is compatible with Node.js.

+

Types are available for your TypeScript codebase.

+
npm i phospho
+# with yarn
+yarn add phospho
+
+
+

Info

+

The phospho module is an open source work in progress. Your help is deeply +appreciated!

+
+

Initialize phospho

+

In your app, initialize the phospho module. By default, phospho will look for PHOSPHO_API_KEY and PHOSPHO_PROJECT_ID environment variables.

+
+

Tip

+

Learn how to get your api key and project id by clicking +here!

+
+
import { phospho } from "phospho";
+
+phospho.init();
+
+

You can also pass the apiKey and projectId parameters to phospho.init.

+
// Initialize phospho
+phospho.init({ apiKey: "api_key", projectId: "project_id" });
+
+

Log with phospho.log

+

The most minimal way to log a task is to use phospho.log.

+

Logging text inputs and outputs

+
const question = "What's the capital of Fashion?";
+
+const myAgent = (query) => {
+  // Here, you'd do complex stuff.
+  // But for this example we'll just return the same answer every time.
+  return "It's Paris of course.";
+};
+
+// Log events to phospho by passing strings directly
+phospho.log({
+  input: question,
+  output: myAgent(question),
+});
+
+

Note that the output is optional. If you don't pass an output, phospho will log null.

+

Logging OpenAI queries and responses

+

phospho aims to be batteries included. So if you pass something else than a string to phospho.log, phospho extracts what's usually considered "the input" or "the output".

+

For example, if you use the OpenAI API:

+
// If you pass full OpenAI queries and results to phospho, it will extract the input and output for you.
+const question = "What's the capital of Fashion?";
+const query = {
+  model: "gpt-3.5-turbo",
+  temperature: 0,
+  seed: 123,
+  messages: [
+    {
+      role: "system",
+      content:
+        "You are an helpful frog who gives life advice to people. You say *ribbit* at the end of each sentence and make other frog noises in between. You answer shortly in less than 50 words.",
+    },
+    {
+      role: "user",
+      content: question,
+    },
+  ],
+  stream: false,
+};
+const result = openai.chat.completions.create(query);
+const loggedContent = await phospho.log({ input: query, output: result });
+
+// Look at the fields "input" and "output" in the logged content
+// Original fields are in "raw_input" and "raw_output"
+console.log("The following content was logged to phospho:", loggedContent);
+
+

Custom extractors

+

Pass custom extractors to phospho.log to extract the input and output from any object. The original object will be converted to a dict (if jsonable) or a string and stored in raw_input and raw_output.

+
phospho.log({
+  input: { custom_input: "this is a complex object" },
+  output: { custom_output: "which is not a string nor a standard object" },
+  // Custom extractors return a string
+  inputToStrFunction: (x) => x.custom_input,
+  outputToStrFunction: (x) => x.custom_output,
+});
+
+

Logging additional metadata

+

You can log additional data with each interaction (user id, version id,...) by passing arguments to phospho.log.

+
const log = phospho.log({
+  input: "log this",
+  output: "and that",
+  // There is a metadata field
+  metadata: { always: "moooore" },
+  // Every extra property is logged as metadata
+  log_anything_and_everything: "even this is ok",
+});
+
+

Streaming

+

phospho supports streamed outputs. This is useful when you want to log the output of a streaming API.

+

Example with phospho.log

+

Pass stream: true to phospho.log to handle streaming responses. When iterating over the response, phospho will automatically log each chunk until the iteration is completed.

+

For example, you can pass streaming OpenAI responses to phospho.log the following way:

+
// This should also work with streaming
+const question = "What's the capital of Fashion?";
+const query = {
+  model: "gpt-3.5-turbo",
+  temperature: 0,
+  seed: 123,
+  messages: [
+    {
+      role: "system",
+      content:
+        "You are an helpful frog who gives life advice to people. You say *ribbit* at the end of each sentence and make other frog noises in between. You answer shortly in less than 50 words.",
+    },
+    {
+      role: "user",
+      content: question,
+    },
+  ],
+  stream: true,
+};
+const streamedResult = await openai.chat.completions.create(query);
+
+phospho.log({ input: query, output: streamedResult, stream: true });
+
+for await (const chunk of streamedResult) {
+  process.stdout.write(chunk.choices[0]?.delta?.content || "");
+}
+
+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/integrations/langchain/index.html b/integrations/langchain/index.html new file mode 100644 index 0000000..57ae18a --- /dev/null +++ b/integrations/langchain/index.html @@ -0,0 +1,2577 @@ + + + + + + + + + + + + + + + + + + + + + + + + Log to phospho in Python Langchain - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Log to phospho in Python Langchain

+ +

phospho can be added to a Langchain agent as a callback handler. By default, the task input is the beginning of the chain, and the task output is the end result. Intermediate steps are also logged.

+
from phospho.integrations import PhosphoLangchainCallbackHandler
+
+chain = ... # Your Langchain agent or chain
+
+chain.invoke(
+    "Your chain input",
+    # Add the callback handler to the config
+    config={"callbacks": [PhosphoLangchainCallbackHandler()]},
+)
+
+

Detailed setup in a retrieval agent

+

1. Setup

+

Set the following environment variables:

+
export PHOSPHO_API_KEY=...
+export PHOSPHO_PROJECT_ID=...
+export OPENAI_API_KEY=...
+
+
+

Tip

+

Learn how to get your project id and api key by clicking +here!

+
+

Install requirements:

+
pip install phospho openai langchain faiss-cpu
+
+

2. Add callback

+

The phospho module implements the Langchain callback as well as other helpful tools to interact with phospho. Learn more in the python doc.

+
+

Info

+

The phospho module is an open source work in progress. Your help is deeply +appreciated!

+
+

For example, let's create a file called main.py with the agent code.

+

phospho is integrated with langchain via the PhosphoLangchainCallbackHandler callback handler. This callback handler will log the input and output of the agent to phospho.

+
from langchain.prompts import ChatPromptTemplate
+from langchain_community.chat_models import ChatOpenAI
+from langchain_community.embeddings import OpenAIEmbeddings
+from langchain_community.vectorstores import FAISS
+from langchain_core.output_parsers import StrOutputParser
+from langchain_core.runnables import RunnablePassthrough
+
+vectorstore = FAISS.from_texts(
+    [
+        "phospho is the LLM analytics platform",
+        "Paris is the capital of Fashion (sorry not sorry London)",
+        "The Concorde had a maximum cruising speed of 2,179 km (1,354 miles) per hour, or Mach 2.04 (more than twice the speed of sound), allowing the aircraft to reduce the flight time between London and New York to about three hours.",
+    ],
+    embedding=OpenAIEmbeddings(),
+)
+retriever = vectorstore.as_retriever()
+template = """Answer the question based only on the following context:
+{context}
+
+Question: {question}
+"""
+prompt = ChatPromptTemplate.from_template(template)
+model = ChatOpenAI()
+
+retrieval_chain = (
+    {"context": retriever, "question": RunnablePassthrough()}
+    | prompt
+    | model
+    | StrOutputParser()
+)
+
+
+# To integrate with Phospho, add the following callback handler
+
+from phospho.integrations import PhosphoLangchainCallbackHandler
+
+
+while True:
+    text = input("Enter a question: ")
+    response = retrieval_chain.invoke(
+        text, 
+        config={
+            "callbacks": [PhosphoLangchainCallbackHandler()]
+        }
+    )
+    print(response)
+
+

The integration with phospho is done by adding the PhosphoLangchainCallbackHandler to the config of the chain. You can learn more about callbacks in the langchain doc.

+

3. Test

+

Start the RAG agent and ask questions about the documents.

+
python main.py
+
+

The agent answers questions based on retrieved documents (RAG, Retrieval Augmented Generation).

+
Enter a question: What's the top speed of the Concorde?
+The Concorde top speed is 2,179km per hour.
+
+

The conversation and the intermediate retrieval steps (such as the documents retrieved) are logged to phospho.

+

Custom logging in langchain

+

For more advanced manual logging with Langchain, you can inherit from the PhosphoLangchainCallbackHandler and add custom behaviour.

+

The callback has a reference to the phospho object, which can be used to log custom data.

+
from typing import Any
+
+from langchain_core.agents import AgentFinish
+from phospho.integrations import PhosphoLangchainCallbackHandler
+
+class MyCustomLangchainCallbackHandler(PhosphoLangchainCallbackHandler):
+
+    def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:
+        """Run on agent end."""
+
+        # Do something custom here
+        self.phospho.log(input="...", output="...")
+
+

You can refer to the langchain doc to have the full list of callbacks available.

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/integrations/postgresql/index.html b/integrations/postgresql/index.html new file mode 100644 index 0000000..bb5f2ab --- /dev/null +++ b/integrations/postgresql/index.html @@ -0,0 +1,2330 @@ + + + + + + + + + + + + + + + + + + + + + + + + Export your data to PostgreSQL - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Export your data to PostgreSQL

+ +
+

Info

+

This feature is in preview. Contact us if you would like to try it out!

+
+

You can export your data to a PostgreSQL database by clicking on the "Export" button in the integration tab.

+

Your data will be synced every 24 hours.

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/integrations/powerbi/index.html b/integrations/powerbi/index.html new file mode 100644 index 0000000..bd905d1 --- /dev/null +++ b/integrations/powerbi/index.html @@ -0,0 +1,2331 @@ + + + + + + + + + + + + + + + + + + + + + + + + Export your data to PowerBI - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Export your data to PowerBI

+ +
+

Info

+

This feature is in preview. Contact us if you would like to try it out!

+
+

You can export your data to PowerBI by clicking on the "Export" button in the integration tab.

+

This will populate a SQL database with your data. You can then connect PowerBI to this database and create a report.

+

Your data will be synced every 24 hours.

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/integrations/python/analytics/index.html b/integrations/python/analytics/index.html new file mode 100644 index 0000000..67493ce --- /dev/null +++ b/integrations/python/analytics/index.html @@ -0,0 +1,2876 @@ + + + + + + + + + + + + + + + + + + + + + + + + Analyze your logs in Python - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + +

Analyze your logs in Python

+ +

Use the phospho Python package to run custom analytics jobs on your logs.

+

Setup

+

Install the package and set your API key and project ID as environment variables.

+
pip install phospho pandas
+export PHOSPHO_API_KEY=your_api_key
+export PHOSPHO_PROJECT_ID=your_project_id
+
+

Load logs as a DataFrame

+

The best way to analyze your logs is to load them into a pandas DataFrame. This format is compatible with most analytics libraries.

+

One row = one (task, event) pair

+

Phospho provides a tasks_df function to load the logs into a flattened DataFrame. Note that you need to have the pandas package installed to use this function.

+
import phospho
+
+phospho.init()
+phospho.tasks_df(limit=1000) # Load the latest 1000 tasks
+
+

This will return a DataFrame where one row is one (task, event) pair.

+

Example:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
task_idtask_inputtask_outputtask_metadatatask_evaltask_eval_sourcetask_eval_attask_created_atsession_idsession_lengthevent_nameevent_created_at
b58aacc6102f4a5e9d2364202ce23bf2Some inputSome output{'client_created_at': 1709925970, 'last_update...successowner2024-03-08 19:27:492024-03-09 15:09:3171ee278ab2874666ae157c28a69c16792correction by user2024-03-08 19:27:43
b58aacc6102f4a5e9d2364202ce23bf2Some inputSome output{'client_created_at': 1709925970, 'last_update...successowner2024-03-08 19:27:492024-03-09 15:09:3171ee278ab2874666ae157c28a69c16792user frustration indication2024-03-08 19:27:43
b58aacc6102f4a5e9d2364202ce23bf2Some inputSome output{'client_created_at': 1709925970, 'last_update...successowner2024-03-08 19:27:492024-03-09 15:09:3171ee278ab2874666ae157c28a69c16792follow-up question2024-03-08 19:27:43
+

This means that:

+
    +
  • If a task has multiple events, there will be multiple rows with the same task_id and different event_name.
  • +
  • If a task has no events, it will have one row with event_name as None.
  • +
+
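For example, continuing from the DataFrame loaded above, you can count how many distinct events were detected on each task with plain pandas. A minimal sketch:

tasks_with_events = phospho.tasks_df(limit=1000)
# Number of distinct events detected on each task (0 for tasks with no events)
events_per_task = tasks_with_events.groupby("task_id")["event_name"].nunique(dropna=True)
print(events_per_task.sort_values(ascending=False).head())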

One row = one task

+

If you want one row to be one task, pass the parameter with_events=False.

+
phospho.tasks_df(limit=1000, with_events=False)
+
+

Result:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
task_idtask_inputtask_outputtask_metadatatask_evaltask_eval_sourcetask_eval_attask_created_atsession_idsession_length
21f3b21e8646402d930f1a02159e942fSome inputSome output{'client_created_at':42f'...failureowner2024-03-08 19:53:592024-03-09 16:45:18a6b1b4224f874608b6037d41d582286a2
64382c6093b04a028a97a14131a4ab32Some inputSome output{'client_created_at':42f'...successowner2024-03-08 19:27:482024-03-09 15:51:079d13562051a84d6c806d4e6f6a58fb371
b58aacc6102f4a5e9d2364202ce23bf2Some inputSome output{'client_created_at':42f'...successowner2024-03-08 19:27:492024-03-09 15:09:3171ee278ab2874666ae157c28a69c16793
+

Ignore session features

+

To ignore the sessions features, pass the parameter with_sessions=False.

+
phospho.tasks_df(limit=1000, with_sessions=False)
+
+

Run custom analytics jobs

+

To run custom analytics jobs, you can leverage all the power of the Python ecosystem.

+
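For example, here is a minimal, hypothetical job that computes the share of successful tasks for each detected event, using only pandas and the columns returned by tasks_df (task_eval and event_name, as shown above):

import phospho

phospho.init()

df = phospho.tasks_df(limit=1000)

# Share of successful tasks for each detected event
success_rate_per_event = (
    df.assign(is_success=df["task_eval"] == "success")
    .groupby("event_name")["is_success"]
    .mean()
    .sort_values(ascending=False)
)
print(success_rate_per_event)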

If you have a lot of complex ML models to run and LLM calls to make, consider the phospho lab that streamlines some of the work for you.

+

+Set up the phospho lab to run custom analytics jobs on your logs +

+

Update logs from a DataFrame

+

After running your analytics jobs, you might want to update the logs with the results.

+

You can use the push_tasks_df function to push the updated data back to Phospho. This will override the specified fields in the logs.

+
# Fetch the 3 latest tasks
+tasks_df = phospho.tasks_df(limit=3)
+
+

Update columns

+

Make changes to columns. Not all columns are updatable. This is to prevent accidental data loss.

+

Here is the list of updatable columns:

+
    +
  • task_eval: Literal["success", "failure"]
  • +
  • task_eval_source: str
  • +
  • task_eval_at: datetime
  • +
  • task_metadata: Dict[str, object] (Note: this will override the whole metadata object, not just the specified keys)
  • +
+

+If you need to update more fields, feel free to open an issue on the GitHub repository, submit a PR, or directly reach out. +

+
# Make some changes
+tasks_df["task_eval"] = "success"
+tasks_df["task_metadata"] = tasks_df["task_metadata"].apply(
+    # To avoid overriding the whole metadata object, use **x to unpack the existing metadata
+    lambda x: {**x, "new_key": "new_value", "stuff": 44}
+)
+
+

Push updated data

+

To push the updated data back to Phospho, use the push_tasks_df function.

+
    +
  • You need to pass the task_id
  • +
  • As a best practice, pass only the columns you want to update.
  • +
+
# Select only the columns you want to update
+phospho.push_tasks_df(tasks_df[["task_id", "task_eval"]])
+
+# To check that the data has been updated
+phospho.tasks_df(limit=3)
+
+

You're all set. Your custom analytics are now also available in the Phospho UI.

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/integrations/python/examples/openai-agent/index.html b/integrations/python/examples/openai-agent/index.html new file mode 100644 index 0000000..6217c15 --- /dev/null +++ b/integrations/python/examples/openai-agent/index.html @@ -0,0 +1,2479 @@ + + + + + + + + + + + + + + + + + + + + + + OpenAI CLI agent - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

OpenAI agent

+

This is an example of a minimal OpenAI assistant in the console. Every interaction is logged to phospho.

+

It demonstrates how to log streaming completions with phospho.log and stream=True.

+

Installation

+
pip install --upgrade phospho openai
+
+

Setup

+

Create a .env file: +

PHOSPHO_PROJECT_ID=...
+PHOSPHO_API_KEY=...
+OPENAI_API_KEY=...
+

+

If you don't have a phospho API key and project ID, go to Getting Started for the step by step instructions.

+

Implementation

+

In assistant.py, add the following code:

+
import phospho
+import openai
+
+from dotenv import load_dotenv
+
+load_dotenv()
+
+phospho.init()
+openai_client = openai.OpenAI()
+
+messages = []
+
+print("Ask GPT anything (Ctrl+C to quit)", end="")
+
+while True:
+    prompt = input("\n>")
+    messages.append({"role": "user", "content": prompt})
+
+    query = {
+        "messages": messages,
+        "model": "gpt-3.5-turbo",
+        "stream": True,
+    }
+    response = openai_client.chat.completions.create(**query)
+
+    phospho.log(input=query, output=response, stream=True)
+
+    print("\nAssistant: ", end="")
+    for r in response:
+        text = r.choices[0].delta.content
+        if text is not None:
+            print(text, end="", flush=True)
+
+

Launch the script and chat with the agent.

+
python assistant.py
+
+

Go to the phospho dashboard to monitor the interactions.

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/integrations/python/examples/openai-streamlit/index.html b/integrations/python/examples/openai-streamlit/index.html new file mode 100644 index 0000000..66d588b --- /dev/null +++ b/integrations/python/examples/openai-streamlit/index.html @@ -0,0 +1,2525 @@ + + + + + + + + + + + + + + + + + + + + + + OpenAI Streamlit agent - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Streamlit webapp with an OpenAI chatbot

+

This is a demo Streamlit webapp that showcases a simple assistant agent whose responses are logged to phospho.

+

This demo shows how you can use phospho to log a complex stream of tokens.

+

Installation

+
pip install --upgrade phospho streamlit openai
+
+

Setup

+

Create a secrets file examples/.streamlit/secrets.toml with your OpenAI API key

+
PHOSPHO_PROJECT_ID=...
+PHOSPHO_API_KEY=...
+OPENAI_API_KEY="sk-..." # your actual key
+
+

Script

+
import streamlit as st
+import phospho
+from openai import OpenAI
+from openai.types.chat import ChatCompletionChunk
+from openai._streaming import Stream
+
+
+st.title("Assistant")  # Let's do an LLM-powered assistant !
+
+# Initialize phospho to collect logs
+phospho.init(
+    api_key=st.secrets["PHOSPHO_API_KEY"],
+    project_id=st.secrets["PHOSPHO_PROJECT_ID"],
+)
+
+# We will use OpenAI
+client = OpenAI(api_key=st.secrets["OPENAI_API_KEY"])
+
+# The messages between user and assistant are kept in the session_state (the browser's cache)
+if "messages" not in st.session_state:
+    st.session_state.messages = []
+
+# Initialize a session. A session is used to group interactions of a single chat
+if "session_id" not in st.session_state:
+    st.session_state.session_id = phospho.new_session()
+
+# Messages are displayed the following way
+for message in st.session_state.messages:
+    with st.chat_message(name=message["role"]):
+        st.markdown(message["content"])
+
+# This is the user's textbox for chatting with the assistant
+if prompt := st.chat_input("What is up?"):
+    # When the user sends a message...
+    new_message = {"role": "user", "content": prompt}
+    st.session_state.messages.append(new_message)
+    with st.chat_message("user"):
+        st.markdown(prompt)
+
+    # ... the assistant replies
+    with st.chat_message("assistant"):
+        message_placeholder = st.empty()
+        full_str_response = ""
+        # We build a query to OpenAI
+        full_prompt = {
+            "model": "gpt-3.5-turbo",
+            # messages contains the whole chat history
+            "messages": [
+                {"role": m["role"], "content": m["content"]}
+                for m in st.session_state.messages
+            ],
+            # stream asks to return a Stream object
+            "stream": True,
+        }
+        # The OpenAI module gives us back a stream object
+        streaming_response: Stream[
+            ChatCompletionChunk
+        ] = client.chat.completions.create(**full_prompt)
+
+        # ----> this is how you log to phospho
+        logged_content = phospho.log(
+            input=full_prompt,
+            output=streaming_response,
+            # We use the session_id to group all the logs of a single chat
+            session_id=st.session_state.session_id,
+            # Adapt the logging to streaming content
+            stream=True,
+        )
+
+        # When you iterate on the stream, you get a token for every response
+        for response in streaming_response:
+            full_str_response += response.choices[0].delta.content or ""
+            message_placeholder.markdown(full_str_response + "โ–Œ")
+
+        # If you don't want to log every streaming chunk, log only the final output.
+        # phospho.log(input=full_prompt, output=full_str_response, metadata={"stuff": "other"})
+        message_placeholder.markdown(full_str_response)
+
+    st.session_state.messages.append(
+        {"role": "assistant", "content": full_str_response}
+    )
+
+

Launch the webapp:

+
streamlit run webapp.py
+
+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/integrations/python/logging/index.html b/integrations/python/logging/index.html new file mode 100644 index 0000000..a99cb62 --- /dev/null +++ b/integrations/python/logging/index.html @@ -0,0 +1,3062 @@ + + + + + + + + + + + + + + + + + + + + + + + + Log to phospho with Python - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + +

Log to phospho with Python

+ +

Log tasks to phospho

+

phospho is a text analytics tool. To send text, you need to log tasks.

+

What's a task in phospho?

+

Tasks are the basic bricks that make up your LLM apps. If you're a programmer, you can think of tasks like functions.

+

A task is made of at least two things:

+
    +
  • input (str): What goes into a task. Eg: what the user asks to the assistant.
  • +
  • output (Optional[str]): What goes out of the task. Eg: what the assistant replied to the user.
  • +
+

The Task abstraction helps you structure your app and quickly explain what it does to an outsider: "Here's what goes in, here's what goes out."

+

It's the basic unit of text analytics. You can analyze the input and output of a task to understand the user's intent, the system's performance, or the quality of the response.

+

Examples of tasks

+
    +
  • Call to an LLM (input = query, output = llm response)
  • +
  • Answering a question (input = question, output = answer)
  • +
  • Searching in documents (input = search query, output = document)
  • +
  • Summarizing a text (input = text, output = summary)
  • +
  • Performing inference of a model (input = X, output = y)
  • +
+

How to log a task?

+

Install phospho module

+

The phospho Python module is the easiest way to log to phospho. It is compatible with Python 3.9+.

+
pip install --upgrade phospho
+
+

+ The phospho module is open source. Feel free to contribute! +

+

Initialize phospho

+

In your app, initialize the phospho module. By default, phospho will look for PHOSPHO_API_KEY and PHOSPHO_PROJECT_ID environment variables.

+
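You can, for example, set these variables in your shell before starting your app. This is a minimal sketch; replace the placeholder values with your own credentials from the phospho dashboard:

export PHOSPHO_API_KEY="your-phospho-api-key"
export PHOSPHO_PROJECT_ID="your-phospho-project-id"
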
+

Tip

+

Learn how to get your api key and project id by clicking +here!

+
+
import phospho
+
+phospho.init()
+
+

You can also pass the api_key and project_id parameters to phospho.init.

+
phospho.init(api_key="phospho-key", project_id="phospho-project-id")
+
+

Log with phospho.log

+

To log messages to phospho, use phospho.log. This function logs a task to phospho. A task is a pair of input and output strings. The output is optional.

+

phospho is a text analytics tool. You can log any string input and output this way:

+
input_text = "Hello! This is what the user asked to the system"
+output_text = "This is the response showed to the user by the app."
+
+# This is how you log a task to phospho
+phospho.log(input=input_text, output=output_text)
+
+

The output is optional.

+

The input and output logged to phospho are displayed in the dashboard and used to perform text analytics.

+

Common use cases

+

Log OpenAI queries and responses

+

phospho aims to be batteries included. So if you pass something other than a str to phospho.log, phospho extracts what's usually considered "the input" or "the output".

+

For example, you can pass to phospho.log the same input as the arguments for openai.chat.completions.create. And you can pass to phospho.log the same output as OpenAI's ChatCompletion objects.

+
import openai
+import phospho
+
+phospho.init()
+openai_client = openai.OpenAI(api_key="openai-key")
+
+input_prompt = "Explain quantum computers in less than 20 words."
+
+# This is your LLM app code
+query = {
+    "messages": [{"role": "system", "content": "You are a helpful assistant."},
+                 {"role": "user", "content": input_prompt},
+    ],
+    "model": "gpt-4o-mini",
+}
+response = openai_client.chat.completions.create(**query)
+
+# You can directly pass a dict or a ChatCompletion object as input and output
+log = phospho.log(input=query, output=response)
+print("input:", log["input"])
+print("output:", log["output"])
+
+
input: Explain quantum computers in less than 20 words.
+output: Qubits harness quantum physics for faster, more powerful computation.
+
+

Note that the input is a dict.

+

Log a list of OpenAI messages

+

In conversational apps, your conversation history is often a list of messages with a role and a content. This is because it's the format expected by OpenAI's chat API.

+

You can directly log this list of messages as an input or an output to phospho.log. The input, output, and system prompt are automatically extracted based on the messages' role.

+
# This is your conversation history in a chat app
+messages = [
+    {"role": "system", "content": "You are a helpful assistant."},
+    {"role": "user", "content": "Explain quantum computers in less than 20 words."},
+]
+
+# Your LLM app code generates a response
+response = openai_client.chat.completions.create(
+    messages=messages,
+    model="gpt-4o-mini",
+)
+
+# You append the response to the conversation history
+messages.append({"role": response.choices[0].role, "content": response.choices[0].message.content, } )
+
+# You can log the conversation history as input or output
+log = phospho.log(input=messages, output=messages)
+
+print("input:", log["input"])
+print("output:", log["output"])
+print("system_prompt:", log["system_prompt"]) # system prompt is automatically extracted
+
+
input: Explain quantum computers in less than 20 words.
+output: Qubits harness quantum physics for faster, more powerful computation.
+system_prompt: You are a helpful assistant.
+
+

Note that consecutive messages with the same role are concatenated with a newline.

+
messages = [
+    {"role": "system", "content": "You are a helpful assistant."},
+    {"role": "user", "content": "Explain quantum computers in less than 20 words."},
+    {"role": "user", "content": "What is the speed of light?"},
+]
+log = phospho.log(input=messages)
+
+
input: Explain quantum computers in less than 20 words.\nWhat is the speed of light?
+
+

If you need more control, consider using custom extractors.

+

Custom extractors

+

Pass custom extractors to phospho.log to extract the input and output from any object. The custom extractor is a function that is applied to the input or output before logging. The function should return a string.

+

The original object is converted to a dict (if jsonable) or a string, and stored in raw_input and raw_output.

+
phospho.log(
+    input={"custom_input": "this is a complex object"},
+    output={"custom_output": "which is not a string nor a standard object"},
+    # Custom extractors return a string
+    input_to_str_function=lambda x: x["custom_input"],
+    output_to_str_function=lambda x: x["custom_output"],
+)
+
+
input: this is a complex object
+output: which is not a string nor a standard object
+
+

Log metadata

+

You can log additional data with each interaction (user id, version id,...) by passing arguments to phospho.log.

+
log = phospho.log(
+    input="log this",
+    output="and that",
+    # There is a metadata field
+    metadata={"always": "moooore"},
+    # Every extra keyword argument is logged as metadata
+    log_anything_and_everything="even this is ok",
+)
+
+

Log streaming outputs

+

phospho supports streamed outputs. This is useful when you want to log the output of a streaming API.

+

Example: OpenAI streaming

+

Out of the box, phospho supports streaming OpenAI completions. Pass stream=True to phospho.log to handle streaming responses.

+

When iterating over the response, phospho will automatically concatenate each chunk until the streaming is finished.

+
from openai.types.chat import ChatCompletionChunk
+from openai._streaming import Stream
+
+query = {
+    "messages": [{"role": "system", "content": "You are a helpful assistant."},
+                 {"role": "user", "content": "Explain quantum computers in less than 20 words."},
+    ],
+    "model": "gpt-4o-mini",
+    # Enable streaming on OpenAI
+    "stream": True
+}
+# OpenAI completion function return a Stream of chunks
+response: Stream[ChatCompletionChunk] = openai_client.chat.completions.create(**query)
+
+# Pass stream=True to phospho.log to handle this
+phospho.log(input=query, output=response, stream=True)
+
+

Example: Local Ollama streaming

+

Let's assume you're in a setup where you stream text from an API. The stream is a generator that yields chunks of the response. The generator is immutable by default.

+

To use this as an output in phospho.log, you need to:

+
    +
  1. Wrap the generator with phospho.MutableGenerator or phospho.MutableAsyncGenerator (for async generators)
  2. +
  3. Specify a stop function that returns True when the streaming is finished. This is used to trigger the logging of the task.
  4. +
+

Here is an example with an Ollama endpoint that streams responses.

+
import json
+import requests
+
+import phospho
+
+phospho.init()
+
+# The prompt that will be sent to the local Ollama endpoint
+prompt = "Explain quantum computers in less than 20 words."
+
+r = requests.post(
+    # This is a local streaming Ollama endpoint
+    "http://localhost:11434/api/generate",
+    json={
+        "model": "mistral-7b",
+        "prompt": "Explain quantum computers in less than 20 words.",
+        "context": [],
+    },
+    # This connects to a streaming API endpoint
+    stream=True,
+)
+r.raise_for_status()
+response_iterator = r.iter_lines()
+
+# response_iterator is a generator that streams the response token by token
+# It is immutable by default
+# In order to directly log this to phospho, we need to wrap it this way
+response_iterator = phospho.MutableGenerator(
+    generator=response_iterator,
+    # Indicate when the streaming stops
+    stop=lambda line: json.loads(line).get("done", False),
+)
+
+# Log the generated content to phospho with stream=True
+phospho.log(input=prompt, output=response_iterator, stream=True)
+
+# As you iterate over the response, phospho combines the chunks
+# When stop(output) is True, the iteration is completed and the task is logged
+for line in response_iterator:
+    print(line)
+
+

Wrap functions with phospho.wrap

+

If you wrap a function with phospho.wrap, phospho automatically logs a task when it is called:

+
    +
  • The passed arguments are logged as input
  • +
  • The returned value is logged as output
  • +
+

You can still use custom extractors and log metadata.

+

Use the @phospho.wrap decorator

+

If you want to log every call to a python function, you can use the @phospho.wrap decorator. This is a nice pythonic way to structure your LLM app's code.

+
@phospho.wrap
+def answer(messages: List[Dict[str, str]]) -> Optional[str]:
+    response = openai_client.chat.completions.create(
+        model="gpt-4o-mini",
+        messages=messages,
+    )
+    return response.choices[0].message.content
+
+

How to log metadata with phospho.wrap?

+

Like phospho.log, every extra keyword argument is logged as metadata.

+
@phospho.wrap(metadata={"more": "details"})
+def answer(messages: List[Dict[str, str]]) -> Optional[str]:
+    response = openai_client.chat.completions.create(
+        model="gpt-4o-mini",
+        messages=messages,
+    )
+    return response.choices[0].message.content
+
+

Wrap an imported function with phospho.wrap

+

If you can't change the function definition, you can wrap it this way:

+
# You can wrap any function call in phospho.wrap
+response = phospho.wrap(
+    openai_client.chat.completions.create,
+    # Pass additional metadata
+    metadata={"more": "details"},
+)(
+    messages=[
+        {"role": "system", "content": "You are a helpful assistant."},
+        {"role": "user", "content": "Explain quantum computers in less than 20 words."},
+    ],
+    model="gpt-4o-mini",
+)
+
+

If you want to wrap all calls to a function, override the function definition with the wrapped version:

+
openai_client.chat.completions.create = phospho.wrap(
+    openai_client.chat.completions.create
+)
+
+

Wrap a streaming function with phospho.wrap

+

phospho.wrap can handle streaming functions. To do that, you need two things:

+
    +
  1. Pass stream=True. This tells phospho to concatenate the string outputs.
  2. +
  3. Pass a stop function, such that stop(output) is True when the streaming is finished; this triggers the logging of the task.
  4. +
+
@phospho.wrap(stream=True, stop=lambda token: token is None)
+def answer(messages: List[Dict[str, str]]) -> Generator[Optional[str], Any, None]:
+    streaming_response: Stream[
+        ChatCompletionChunk
+    ] = openai_client.chat.completions.create(
+        model="gpt-4o-mini",
+        messages=messages,
+        stream=True,
+    )
+    for response in streaming_response:
+        yield response.choices[0].delta.content
+
+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/integrations/python/reference/index.html b/integrations/python/reference/index.html new file mode 100644 index 0000000..a46f39b --- /dev/null +++ b/integrations/python/reference/index.html @@ -0,0 +1,2325 @@ + + + + + + + + + + + + + + + + + + + + Python module reference - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Python module reference

+ +
+
    +
  • +

    Full Python module reference

    +
    +

    Click here to get the doc for every function of the Python module.

    +

    Read the docs

    +
  • +
  • +

    Source code

    +
    +

    Your contributions are welcome!

    +

    View on GitHub

    +
  • +
  • +

    Python module on PyPI

    +
    +

    pip install phospho

    +

    View on PyPI

    +
  • +
+
+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/integrations/python/testing/index.html b/integrations/python/testing/index.html new file mode 100644 index 0000000..14b051c --- /dev/null +++ b/integrations/python/testing/index.html @@ -0,0 +1,2596 @@ + + + + + + + + + + + + + + + + + + + + + + + + Testing with Python - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+ +
+
+ + + +
+
+ + + + + + + + +

Testing with Python

+ +

Evaluate your app's performance before deploying it to production.

+

The phospho testing framework allows you to test your app with historical data, custom datasets, and custom tests.

+

The phospho python module parallelizes the function calls to speed up the testing process.

+

Getting started

+

To get started, install the phospho python module.

+
pip install -U phospho
+
+

Create a new file phospho_testing.py:

+
import phospho
+
+phospho_test = phospho.PhosphoTest()
+
+

In this file, you can then write your tests.

+

Backtesting

+

To use data from the phospho platform, you can use the backtest source loader.

+
import phospho 
+
+phospho_test = phospho.PhosphoTest()
+
+@phospho_test.test(
+    source_loader="backtest",  # Load data from logged phospho data
+    source_loader_params={"sample_size": 3},
+)
+def test_backtest(message: phospho.lab.Message) -> str | None:
+    client = phospho.lab.get_sync_client("mistral")
+    response = client.chat.completions.create(
+        model="mistral-small",
+        messages=[
+            {"role": "system", "content": "You are an helpful assistant"},
+            {"role": message.role, "content": message.content},
+        ],
+    )
+    return response.choices[0].message.content
+
+

Dataset .CSV, .XLSX, .JSON

+

To test with a custom dataset, you can use the dataset source loader.

+
import phospho 
+
+phospho_test = phospho.PhosphoTest()
+
+@phospho_test.test(
+    source_loader="dataset", 
+    source_loader_params={"path": "path/to/dataset.csv"},
+)
+def test_dataset(column_a: str, column_b: str) -> str | None:
+    client = phospho.lab.get_sync_client("mistral")
+    response = client.chat.completions.create(
+        model="mistral-small",
+        messages=[
+            {"role": "system", "content": "You are an helpful assistant"},
+            {"role": "user", "content": column_a},
+        ],
+    )
+    return response.choices[0].message.content
+
+

Supported file formats: csv, xlsx, json

+
+

Info

+

The columns of the dataset file should match the function arguments.

+
+

Example of a local csv file:

+
column_a, column_b
+"What's larger, 3.9 or 3.11?", "3.11"
+
+

Custom tests

+

To write custom tests, you can just create a function and decorate it with @phospho_test.test().

+

At the end, add phospho.log to send the data to phospho for analysis.

+
import phospho
+
+phospho_test = phospho.PhosphoTest()
+
+@phospho_test.test()
+def test_simple():
+    client = phospho.lab.get_sync_client("mistral")
+    response = client.chat.completions.create(
+        model="mistral-small",
+        messages=[
+            {"role": "system", "content": "You are an helpful assistant"},
+            {"role": "user", "content": "What's bigger: 3.11 or 3.9?"},
+        ],
+    )
+    response_text = response.choices[0].message.content
+    # Use phospho.log to send the data to phospho for analysis
+    phospho.log(
+        input="What's bigger: 3.11 or 3.9?",
+        output=response_text,
+        # Specify the version_id of the test
+        version_id=phospho_test.version_id,
+    )
+
+

Run using python

+

To run the tests, use the run method of the PhosphoTest class.

+
phospho_test.run()
+
+

The executor_type can be either:

- parallel (default): parallelizes the backtest and dataset source loader calls.
- parallel_jobs: all functions are called in parallel.
- sequential: great for debugging.

+

Run using the phospho CLI

+

You can also use the phospho command line interface to run the tests. In the folder where phospho_testing.py is located, run:

+
phospho init # Run this only once
+phospho test
+
+

The executor type can be specified with the --executor-type flag.

+
phospho test --executor-type=parallel_jobs
+
+

Learn more using the --help flag:

+
phospho test --help
+
+
+
    +
  • +

    phospho CLI

    +
    +

    Learn how to install phospho command line interface

    +

    Read more

    +
  • +
+
+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/integrations/supabase/index.html b/integrations/supabase/index.html new file mode 100644 index 0000000..e299d88 --- /dev/null +++ b/integrations/supabase/index.html @@ -0,0 +1,2936 @@ + + + + + + + + + + + + + + + + + + + + + + + + Log to phospho in a Supabase app with a webhook - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + +

Log to phospho in a Supabase app with a webhook

+ +

phospho is a platform that helps you build better chatbots by providing AI analytics about the user experience of your chatbot.

+

Supabase is an open-source database, authentication system, and hosting platform that allows you to quickly and easily build powerful web-based applications.

+

If you're using Supabase to build a chatbot, here's how you can log your chatbot messages to phospho using a Supabase Database webhook, a Supabase Edge Function, and the phospho API.

+

Prerequisites

+

We assume in this guide that you have already set up a Supabase project.

+
npm i supabase
+supabase init
+supabase login
+
+

We also assume that you have already created the chatbot UI using Supabase (here's a template).

+

Add the phospho API key and project id to your Supabase project

+

Create an account on phospho and get your API key and project id from the Settings.

+

Then, add the PHOSPHO_API_KEY and PHOSPHO_PROJECT_ID secrets to your Supabase project.

+

Option 1: In the CLI

+

Add the phospho API key and project id to your ./supabase/.env file:

+

PHOSPHO_API_KEY="..."
+PHOSPHO_PROJECT_ID="..."
+

Push those secrets to your Supabase project:

+
supabase secrets set --env-file ./supabase/.env
+

+

Option 2: In the console UI

+

Add directly the phospho API key and project id as Edge Functions Secrets in the Supabase console. Go to Settings/Edge Functions, and create the PHOSPHO_API_KEY and PHOSPHO_PROJECT_ID secrets.

+

Edge functions secrets

+

Setup your chat_history table

+

If you're using Supabase to build a chatbot, you probably already have a table that stores the chat history of your users. This table lets your users access their chat history in your app even after they close the website.

+

If you don't, you need to create a chat_history table.

+

Here's what your chat_history table should look like:

+ + + + + + + + + + + + + + + + + + + +
message_idchat_iduser_messageassistant_responsemetadata
c8902bda289bc8edaHiHello! How can I help you?{"model_name": "gpt-3.5"}
+

Here are the columns of the table:

+
    +
  • message_id (UUID), the unique id of the message.
  • +
  • chat_id (UUID), the unique id of the chat. All the messages from the same conversation should have the same chat_id.
  • +
  • user_message (TEXT), the message sent by the user.
  • +
  • assistant_response (TEXT), is the response displayed to the user. It can be the direct generation of an LLM, or the result of a multistep generation.
  • +
  • (Optional) metadata (JSON), a dictionary containing metadata about the message
  • +
+

Create the table

+

In Supabase, create a new table called chat_history with the columns described above. Customize the table to match your app behaviour.

+

Here's for example the SQL code to create the table with the columns described above:

+
create table
+  public.chat_history (
+    message_id uuid not null default gen_random_uuid (),
+    chat_id uuid not null default gen_random_uuid (),
+    user_message text not null,
+    assistant_response text null,
+    metadata json null,
+    constraint chat_history_pkey primary key (message_id)
+  ) tablespace pg_default;
+
+

Update the table

+

The table chat_history should be updated every time a new message is sent to your chatbot.

+

Example of how to insert a new row in the chat_history table with Supabase:

+
// The first time a user sends a message, let the chat_id be generated automatically
+const { firstMessage, error } = await supabase
+  .from('chat_history')
+  .insert({ 
+    user_message: userMessage, // The message sent by the user
+    assistant_response: assistantResponse, // The response displayed to the user, eg LLM generation
+    metadata: metadata // Optional Object
+}).select()
+
+// We get the chat_id of the first message
+const chat_id = firstMessage.chat_id
+
+// The next time the user sends a message, we use the same chat_id
+// This groups all the messages from the same conversation
+const { error } = await supabase
+  .from('chat_history')
+  .insert({ 
+    chat_id: chat_id, 
+    user_message: userMessage, 
+    assistant_response: assistantResponse, 
+    metadata: metadata
+}).select()
+
+

Setup the Supabase Edge Function

+

Let's create a Supabase Edge Function that will log the chat message to phospho using the phospho API. Later, we will trigger this function with a Supabase Database webhook.

+

Create the Edge Function

+

Create a new Edge Function called phospho-logging inside your project:

+
supabase functions new phospho-logging
+
+

This creates a function stub in your supabase folder:

+
└── supabase
+    ├── functions
+    │   └── phospho-logging
+    │   │   └── index.ts ## Your function code
+    └── config.toml
+
+

Write the code to call the phospho API

+

In the newly created index.ts file, we add a basic code that:

+
    +
  1. Gets the phospho API key and project id from the environment variables.
  2. +
  3. Converts the payload sent by Supabase to the format expected by the phospho API.
  4. +
  5. Sends the payload to the phospho API.
  6. +
+

Here's an example of what the code could look like:

+

// Get the phospho API key and project id from the environment variable
+const phosphoApiKey = Deno.env.get("PHOSPHO_API_KEY");
+const phosphoProjectId = Deno.env.get("PHOSPHO_PROJECT_ID");
+const phosphoUrl = `https://api.phospho.ai/v2/log/${phosphoProjectId}`;
+
+// This interface describes the payload sent by Supabase to the Edge Function
+// Change this to match your chat_history table
+interface ChatHistoryPayload {
+  type: "INSERT" | "UPDATE" | "DELETE";
+  table: string;
+  record: {
+    message_id: string;
+    chat_id: string;
+    user_message: string;
+    assistant_response: string;
+    metadata: {
+      model_name: string;
+    };
+  };
+}
+
+Deno.serve(
+  async (req: {
+    json: () => ChatHistoryPayload | PromiseLike<ChatHistoryPayload>;
+  }) => {
+    if (!phosphoApiKey) {
+      throw new Error("Missing phospho API key");
+    }
+    if (!phosphoProjectId) {
+      throw new Error("Missing phospho project id");
+    }

+
const payload: ChatHistoryPayload = await req.json();
+
+// Here, we react to the INSERT and UPDATE events on the chat_history table
+// Change this to match your chat_history table
+if (payload.record.user_message && (payload.type === "UPDATE" || payload.type === "INSERT")) {
+    // Here, we convert the payload to the format expected by the phospho API
+    // Change this to match your chat_history table
+    const phosphoPayload = {
+    batched_log_events: [
+        {
+            // Here's how to map the payload to the phospho API
+            task_id: payload.record.message_id,
+            session_id: payload.record.chat_id,
+            input: payload.record.user_message,
+            output: payload.record.assistant_response,
+        },
+    ],
+    };
+
+    // Send the payload to the phospho API
+    const response = await fetch(phosphoUrl, {
+    method: "POST",
+    headers: {
+        Authorization: `Bearer ${phosphoApiKey}`,
+        "Content-Type": "application/json",
+    },
+    body: JSON.stringify(phosphoPayload),
+    });
+
+    if (!response.ok) {
+    throw new Error(
+        `Error sending chat data to Phospho: ${response.statusText}`
+    );
+    }
+
+    return new Response(null, { status: 200 });
+}
+
+return new Response("No new chat message detected", { status: 200 });
+
+

}
+);
+

Feel free to change the code to adapt it to your chat_history table and to how your chat messages are stored.

+

Deploy the Edge Function

+

Deploy the function to your Supabase project:

+
supabase functions deploy phospho-logging --project-ref your_supabase_project_ref
+

+

Your Supabase project ref can be found in your console URL: https://supabase.com/dashboard/project/project-ref

+

Setup the Supabase Webhook

+

Now that you have created the Supabase Edge Function, create a Supabase Database webhook to trigger it.

+

Create the webhook

+

In the Supabase console, go to Database/Webhook.

+

Webhooks

+

Click on Create new in the top right. Make the webhook trigger on the chat_history table, and on the INSERT and UPDATE events.

+

Webhooks again

+

Call the Edge Function with authentication

+

In the webhook configuration, select the type of webhook "Supabase Edge Function" and select the phospho-logging you just deployed.

+

In the HTTP Headers section, add an Authorization header with the value Bearer ${SUPABASE_PROJECT_ANON_PUBLIC_KEY}. Find your anon public key in the console, in the tabs Settings/API/Project API keys.

+

+

Test the webhook

+

To test the webhook, insert a row in the chat_history table, and the webhook should be triggered. You'll see the logs in the phospho dashboard.

+
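For example, you can insert a test row from the Supabase SQL editor. This is a minimal sketch based on the chat_history table created above; adapt the columns and values to your own schema:

insert into public.chat_history (user_message, assistant_response, metadata)
values ('Hi', 'Hello! How can I help you?', '{"model_name": "gpt-3.5"}');
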

You can also send a message to your chatbot. This will now trigger the webhook and log the message to phospho.

+

Next steps

+

You're done! You are now logging the chatbot messages to phospho and can learn how users interact with your chatbot using the phospho dashboard and AI analytics.

+

Learn more about phospho features by reading the guides:

+
+
    +
  • +

    Log user feedback

    +
    +

    Log user feedback to phospho to improve the phospho evaluation

    +

    Read more

    +
  • +
  • +

    Run AB Tests

    +
    +

    Try different versions of your chatbot and compare outcomes on phospho

    +

    Read more

    +
  • +
+
+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/local/custom-job/index.html b/local/custom-job/index.html new file mode 100644 index 0000000..7217267 --- /dev/null +++ b/local/custom-job/index.html @@ -0,0 +1,2486 @@ + + + + + + + + + + + + + + + + + + + + + + + + Create Custom Jobs - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Create Custom Jobs

+ +

phospho comes with several built-in jobs that you can use to process your messages: zero-shot evaluation, classification based evaluation, event detection...

+

But you can also create your own jobs and run them on your messages. This is what we call a custom job.

+

Creating a custom job function

+

To create a custom job function, you need to create a function that:

+
    +
  • takes a lab.Message as input
  • +
  • can take additional parameters if needed (they will be passed as JobConfig)
  • +
  • returns a lab.JobResult. The lab.JobResult should contain the result of the job function and the type of the result.
  • +
+

For instance, to define a simple job that checks if a message contains a forbidden word, you can create a Job function like this:

+
from phospho import lab
+from typing import List
+import re
+
+def my_custom_job(message: lab.Message, forbidden_words: List) -> lab.JobResult:
+    """
+    For each message, we will check if any of the forbidden words are present in the message.
+    The function will return a JobResult with a boolean value
+    (True if one of the words is present, False otherwise).
+    """
+
+    pattern = r'\b(' + '|'.join(re.escape(word) for word in forbidden_words) + r')\b'
+
+    # Use re.search() to check if any of the words are in the text
+    if re.search(pattern, message.content):
+        result = True
+    else:
+        result = False
+
+    return lab.JobResult(
+        job_id="my_custom_job",
+        result_type=lab.ResultType.bool,
+        value=result,
+    )
+
+

Running a custom job

+

Once you have defined your custom job function, you can create a Job in your workload that will run this job function on your messages.

+

You need to pass the function in the job_function of the lab.Job object.

+

In our example:

+
# Create a workload in our lab
+workload = lab.Workload()
+
+# Add our job to the workload
+workload.add_job(
+    lab.Job(
+        id="regex_check",
+        job_function=my_custom_job, # We add our custom job function here
+        config=lab.JobConfig(
+            forbidden_words=["cat", "dog"]
+        ),
+    )
+)
+
+

This workload can then be run on your messages using the async_run method.

+
await workload.async_run(
+    messages=[
+        # No forbidden word is present.
+        lab.Message(
+            id="message_1",
+            content="I like elephants.",
+        ),
+        # One forbidden word is present.
+        lab.Message(
+            id="message_2",
+            content="I love my cat.",
+        )
+    ]
+)
+
+# Let's see the results
+for i in range(1, 3):
+    print(
+        f"In message {i}, a forbidden word was detected: {workload.results['message_'+str(i)]['regex_check'].value}"
+    )
+
+# In message 1, a forbidden word was detected: False
+# In message 2, a forbidden word was detected: True
+
+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/local/llm-provider/index.html b/local/llm-provider/index.html new file mode 100644 index 0000000..2ba2252 --- /dev/null +++ b/local/llm-provider/index.html @@ -0,0 +1,2335 @@ + + + + + + + + + + + + + + + + + + + + + + + + Using a custom LLM provider - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Using a custom LLM provider

+ +

phospho preview can be run using any OpenAI-compatible LLM provider (see the configuration sketch after the list below). The most common ones include:

+
    +
  • Mistral AI (https://mistral.ai/)
  • +
  • Ollama (https://ollama.com/)
  • +
  • vLLM (https://docs.vllm.ai/)
  • +
  • and many others
  • +
+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/local/optimize/index.html b/local/optimize/index.html new file mode 100644 index 0000000..04abc44 --- /dev/null +++ b/local/optimize/index.html @@ -0,0 +1,2682 @@ + + + + + + + + + + + + + + + + + + + + + + + + Optimize Jobs - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + +

In this guide, we will use the lab from the phospho package to run an event extraction task on a dataset. First, we will run it on a subset of the dataset with several models:

+
    +
  • the OpenAI API
  • +
  • the Mistral AI API
  • +
  • a local Ollama model
  • +
+

Then, we will use the lab optimizer to find the best model and hyperparameters for the task in terms of performance, speed, and price.

+

Finally, we will use the lab to run the best model on the full dataset and compare the results with the subset.

+

Feel free to only use the APIs or Ollama models you want.

+

Installation and setup

+

You will need:

+
    +
  • an OpenAI API key (find yours here)
  • +
  • a Mistral AI API key (find yours here)
  • +
  • Ollama running on your local machine, with the Mistral 7B model installed. You can find the installation instructions for Ollama here
  • +
+
pip install --upgrade phospho
+
+

(Optional) Install Ollama

+

If you want to use Ollama, install the Ollama app on your desktop, launch it, and install the python package to interact with it:

+
pip install ollama
+
+

Test your installation by running the following script:

+
import ollama
+
+try:
+  # Let's check we can reach your local Ollama API
+  response = ollama.chat(model='mistral', messages=[
+    {
+      'role': 'user',
+      'content': 'What is the best French cheese? Keep your answer short.',
+    },
+  ])
+  print(response['message']['content'])
+except Exception as e:
+  print(f"Error: {e}")
+  print("You need to have a local Ollama server running to continue and the mistral model downloaded. \nRemove references to Ollama otherwise.")
+
+

Define the phospho workload and jobs

+
from phospho import lab
+from typing import Literal
+
+# Create a workload in our lab
+workload = lab.Workload()
+
+# Setup the configs for our job
+# Models are ordered from the least desired to the most desired
+class EventConfig(lab.JobConfig):
+    event_name: str
+    event_description: str
+    model_id: Literal["openai:gpt-4", "mistral:mistral-large-latest", "mistral:mistral-small-latest", "ollama:mistral-7B"] = "openai:gpt-4"
+
+# Add our job to the workload
+workload.add_job(
+    lab.Job(
+        name="sync_event_detection",
+        id="question_answering",
+        config=EventConfig(
+            event_name="Question Answering",
+            event_description="User asks a question to the assistant",
+            model_id="openai:gpt-4"
+        )
+    )
+)
+
+

Loading a message dataset

+

Let's load a dataset of messages from huggingface, so we can run our extraction job on it.

+
pip install datasets
+
+
from datasets import load_dataset
+
+dataset = load_dataset("daily_dialog")
+
+# Generate a sub dataset with 30 messages
+sub_dataset = dataset["train"].select(range(30))
+
+# Let's print one of the messages
+print(sub_dataset[0]["dialog"][0])
+
+# Build the message list for our lab
+messages = []
+for row in sub_dataset:
+    text = row["dialog"][0]
+    messages.append(lab.Message(content=text))
+
+# Run the lab on it
+# The job will be run with the configured model (openai:gpt-4)
+workload_results = await workload.async_run(messages=messages, executor_type="parallel")
+
+# Compute alternative results with the Mistral API and Ollama
+await workload.async_run_on_alternative_configurations(messages=messages, executor_type="parallel")
+
+

Apply the optimizer to the pipeline

+

For the purpose of this demo, we consider a configuration good enough if it matches gpt-4 on at least 80% of the dataset. Good old Pareto.

+

You can check the current configuration of the workload with:

+
workload.jobs[0].config.model_id
+
+

To run the optimizer, just run the following:

+
workload.optimize_jobs(accuracy_threshold=0.8)
+
+# let's check the new model_id (if it has changed)
+workload.jobs[0].config.model_id
+
+

For us, mistral:mistral-small-latest was selected.

+

Run our workload on the full dataset, with optimized parameters

+

We can now run the workload on the full dataset, with the optimized model.

+
sub_dataset = dataset["train"] # Here you can limit the dataset to a subset if you want to test faster and cheaper
+
+# Build the message list for our lab
+messages = []
+for row in sub_dataset:
+    text = row["dialog"][0]
+    messages.append(lab.Message(content=text))
+
+# The job will be run with the best model (mistral:mistral-small-latest in our case)
+workload_results = await workload.async_run(messages=messages, executor_type="parallel")
+
+

Analyze the results

+
boolean_result = []
+
+# Go through the dict
+for key, value in workload_results.items():
+    result = value['question_answering'].value
+    boolean_result.append(result)
+
+# Let's count the number of True and False
+true_count = boolean_result.count(True)
+false_count = boolean_result.count(False)
+
+print(f"In the dataset, {true_count/len(boolean_result)*100}% of the messages are a question. The rest are not.")
+
+

In our case:

+
In the dataset, 44.5% of the messages are a question. The rest are not.
+
+

Going further

+

You can use the lab to run other tasks, such as:

+
    +
  • Named Entity Recognition
  • +
  • Sentiment Analysis
  • +
  • Evaluations
  • +
  • And more!
  • +
+

You can also play around with different models, different hyperparameters, and different datasets.

+

You want to have such analysis on your own LLM app, in real time? Check out the cloud hosted version of phospho, available on phospho.ai

+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/local/quickstart/index.html b/local/quickstart/index.html new file mode 100644 index 0000000..b8d1ee2 --- /dev/null +++ b/local/quickstart/index.html @@ -0,0 +1,2538 @@ + + + + + + + + + + + + + + + + + + + + + + + + Quickstart - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + +

Quickstart

+ +

Get started with phospho lab, the core of phospho. This is what the hosted version of phospho leverages to deliver insights.

+

phospho diagram

+
+

Note

+

Looking to setup logging to the phospho hosted version? Read this guide instead.

+
+

The phospho lab is a tool that allows you to run evaluations and detect events in your messages.

+
    +
  1. Define custom workloads and jobs
  2. +
  3. Run them on your messages in parallel
  4. +
  5. Optimize your models and configurations
  6. +
+

Installation

+

Install the phospho package with the lab extra:

+
pip install "phospho[lab]"
+
+

You need to set your OPENAI_API_KEY as an environment variable.

+
export OPENAI_API_KEY=your_openai_api_key
+
+

If you don't want to use OpenAI, you can setup Ollama and set the following environment variables:

+
export OVERRIDE_WITH_OLLAMA_MODEL=mistral
+
+

This will replace all calls to OpenAI models with calls to the mistral model running with Ollama. Make sure you've downloaded the mistral model in Ollama beforehand (for example with ollama pull mistral).

+

Create a workload

+

The phospho lab lets you run extractions on your messages.

+

Start by creating a workload. A workload is a set of jobs that you want to run on your messages.

+
from phospho import lab
+
+# Create the phospho workload
+workload = lab.Workload()
+
+

Define jobs

+

Define jobs and add them to the workload. For example, let's add an event detection job. Those are the jobs you can setup in phospho cloud.

+
# Define the job configurations
+class EventConfig(lab.JobConfig):
+    event_name: str
+    event_description: str
+
+# Let's add an event detection task to our workload
+workload.add_job(
+            lab.Job(
+                id="question_answering",
+                job_function=lab.job_library.event_detection,
+                config=EventConfig(
+                    event_name="question_answering",
+                    event_description="The user asks a question to the assistant",
+                ),
+            )
+        )
+
+

Run the workload

+

Now, you can run the workload on your messages.

+

Messages are a basic abstraction. They can be user messages or LLM outputs. They can contain metadata or additional information. It's up to the jobs to decide what to do with them.

+
# Let's add some messages to analyze
+message = lab.Message(
+                    id="my_message_id",
+                    role="User",
+                    content="What is the weather today in Paris?",
+                )
+
+# Run the workload on the message
+# Note that this is an async function. Use asyncio.run to run it in a script.
+await workload.async_run(
+            messages=[message],
+            executor_type="sequential",
+        )
+
+

Gather results

+

Results are stored in the workload.

+
# Check the results of the workload
+message_results = workload.results["my_message_id"]
+
+print(f"Result of the event detection: {message_results['question_answering'].value}")
+
+

You can also get them in a pandas dataframe.

+
workload.results_df()
+
+ + + + + + + + + + + + + + + + +
+
+ + + + + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/mintlify/.gitignore b/mintlify/.gitignore deleted file mode 100644 index 48c318e..0000000 --- a/mintlify/.gitignore +++ /dev/null @@ -1,5 +0,0 @@ -.DS_Store -package-lock.json -node_modules - -.env \ No newline at end of file diff --git a/mintlify/README.md b/mintlify/README.md deleted file mode 100644 index b17a41e..0000000 --- a/mintlify/README.md +++ /dev/null @@ -1,30 +0,0 @@ -# ๐Ÿงช phospho documentation - -This is the user-facing documentation of the [phospho platform](https://platform.phospho.ai) - -- The deployed docs [are available here.](https://docs.phospho.ai/welcome) -- The open source code of the platform [is available here.](https://github.com/phospho-app/phospho) - -## Local development - -The docs use Mintlify for deployment. Install the [Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview the documentation changes locally. To install, use the following command - -Use node version 18. Install with nvm. - -``` -npm i -g mintlify -``` - -Run the following command at the root of your documentation (where mint.json is) - -``` -mintlify dev -``` - -To learn how to format the pages and what blocks you can use, [check out the Mintlify docs.](https://mintlify.com/docs/so-100/quickstart) - -### Troubleshooting - -- `mintlify dev` doesn't run - try `npx mintlify dev` -- Mintlify dev isn't running - Run `mintlify install` it'll re-install dependencies. -- Page loads as a 404 - Make sure you are running in a folder with `mint.json` diff --git a/mintlify/ai-training/ai-control-start.mdx b/mintlify/ai-training/ai-control-start.mdx deleted file mode 100644 index d19ed92..0000000 --- a/mintlify/ai-training/ai-control-start.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /ai-control/start ---- \ No newline at end of file diff --git a/mintlify/ai-training/ai-control-stop.mdx b/mintlify/ai-training/ai-control-stop.mdx deleted file mode 100644 index 4669cad..0000000 --- a/mintlify/ai-training/ai-control-stop.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /ai-control/stop ---- \ No newline at end of file diff --git a/mintlify/ai-training/cancel-training.mdx b/mintlify/ai-training/cancel-training.mdx deleted file mode 100644 index abc9d33..0000000 --- a/mintlify/ai-training/cancel-training.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /training/cancel ---- diff --git a/mintlify/ai-training/start-training.mdx b/mintlify/ai-training/start-training.mdx deleted file mode 100644 index 04c9b07..0000000 --- a/mintlify/ai-training/start-training.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /training/start ---- diff --git a/mintlify/assets/Calibration-position-1.jpg b/mintlify/assets/Calibration-position-1.jpg deleted file mode 100644 index 3bcaa68..0000000 Binary files a/mintlify/assets/Calibration-position-1.jpg and /dev/null differ diff --git a/mintlify/assets/Calibration-position-2.jpg b/mintlify/assets/Calibration-position-2.jpg deleted file mode 100644 index 24211a2..0000000 Binary files a/mintlify/assets/Calibration-position-2.jpg and /dev/null differ diff --git a/mintlify/assets/Calibration.jpg b/mintlify/assets/Calibration.jpg deleted file mode 100644 index b493d54..0000000 Binary files a/mintlify/assets/Calibration.jpg and /dev/null differ diff --git a/mintlify/assets/Rpi-OS-customisation.jpg b/mintlify/assets/Rpi-OS-customisation.jpg deleted file mode 100644 index a803474..0000000 Binary files a/mintlify/assets/Rpi-OS-customisation.jpg and /dev/null differ diff --git 
a/mintlify/assets/Rpi-SSH-enabled.jpg b/mintlify/assets/Rpi-SSH-enabled.jpg deleted file mode 100644 index a96fc77..0000000 Binary files a/mintlify/assets/Rpi-SSH-enabled.jpg and /dev/null differ diff --git a/mintlify/assets/Rpi-general-settings.jpg b/mintlify/assets/Rpi-general-settings.jpg deleted file mode 100644 index 51ce9bb..0000000 Binary files a/mintlify/assets/Rpi-general-settings.jpg and /dev/null differ diff --git a/mintlify/assets/Rpi-imager.jpg b/mintlify/assets/Rpi-imager.jpg deleted file mode 100644 index 01db189..0000000 Binary files a/mintlify/assets/Rpi-imager.jpg and /dev/null differ diff --git a/mintlify/assets/admin-settings-huggingface.png b/mintlify/assets/admin-settings-huggingface.png deleted file mode 100644 index ede149c..0000000 Binary files a/mintlify/assets/admin-settings-huggingface.png and /dev/null differ diff --git a/mintlify/assets/controll_schema.png b/mintlify/assets/controll_schema.png deleted file mode 100644 index 230e631..0000000 Binary files a/mintlify/assets/controll_schema.png and /dev/null differ diff --git a/mintlify/assets/create-hf-token.png b/mintlify/assets/create-hf-token.png deleted file mode 100644 index eb91a33..0000000 Binary files a/mintlify/assets/create-hf-token.png and /dev/null differ diff --git a/mintlify/assets/depth_camera.png b/mintlify/assets/depth_camera.png deleted file mode 100644 index aa50750..0000000 Binary files a/mintlify/assets/depth_camera.png and /dev/null differ diff --git a/mintlify/assets/depth_image.png b/mintlify/assets/depth_image.png deleted file mode 100644 index faf6f03..0000000 Binary files a/mintlify/assets/depth_image.png and /dev/null differ diff --git a/mintlify/assets/gazebo.gif b/mintlify/assets/gazebo.gif deleted file mode 100644 index 0956282..0000000 Binary files a/mintlify/assets/gazebo.gif and /dev/null differ diff --git a/mintlify/assets/genesis.webp b/mintlify/assets/genesis.webp deleted file mode 100644 index 602830a..0000000 Binary files a/mintlify/assets/genesis.webp and /dev/null differ diff --git a/mintlify/assets/junior.jpg b/mintlify/assets/junior.jpg deleted file mode 100644 index 2c782a9..0000000 Binary files a/mintlify/assets/junior.jpg and /dev/null differ diff --git a/mintlify/assets/lerobot_dataset_visualizer.png b/mintlify/assets/lerobot_dataset_visualizer.png deleted file mode 100644 index 4ceade3..0000000 Binary files a/mintlify/assets/lerobot_dataset_visualizer.png and /dev/null differ diff --git a/mintlify/assets/lerobot_dataset_viz.png b/mintlify/assets/lerobot_dataset_viz.png deleted file mode 100644 index cc7c800..0000000 Binary files a/mintlify/assets/lerobot_dataset_viz.png and /dev/null differ diff --git a/mintlify/assets/meta-quest-server-list.png b/mintlify/assets/meta-quest-server-list.png deleted file mode 100644 index 3f600fb..0000000 Binary files a/mintlify/assets/meta-quest-server-list.png and /dev/null differ diff --git a/mintlify/assets/mujoco.png b/mintlify/assets/mujoco.png deleted file mode 100644 index f24877a..0000000 Binary files a/mintlify/assets/mujoco.png and /dev/null differ diff --git a/mintlify/assets/names_buttons.jpg b/mintlify/assets/names_buttons.jpg deleted file mode 100644 index e14027d..0000000 Binary files a/mintlify/assets/names_buttons.jpg and /dev/null differ diff --git a/mintlify/assets/nvidia_isaac.gif b/mintlify/assets/nvidia_isaac.gif deleted file mode 100644 index 203fcd5..0000000 Binary files a/mintlify/assets/nvidia_isaac.gif and /dev/null differ diff --git a/mintlify/assets/nvidia_isaac_small.gif 
b/mintlify/assets/nvidia_isaac_small.gif deleted file mode 100644 index 6144928..0000000 Binary files a/mintlify/assets/nvidia_isaac_small.gif and /dev/null differ diff --git a/mintlify/assets/packshot-dk1.jpg b/mintlify/assets/packshot-dk1.jpg deleted file mode 100644 index 228b6ee..0000000 Binary files a/mintlify/assets/packshot-dk1.jpg and /dev/null differ diff --git a/mintlify/assets/packshot-dk2.jpg b/mintlify/assets/packshot-dk2.jpg deleted file mode 100644 index 546cb78..0000000 Binary files a/mintlify/assets/packshot-dk2.jpg and /dev/null differ diff --git a/mintlify/assets/pdk1_plugged.jpg b/mintlify/assets/pdk1_plugged.jpg deleted file mode 100644 index f57784a..0000000 Binary files a/mintlify/assets/pdk1_plugged.jpg and /dev/null differ diff --git a/mintlify/assets/phosphobot-ai-control.png b/mintlify/assets/phosphobot-ai-control.png deleted file mode 100644 index ddf9169..0000000 Binary files a/mintlify/assets/phosphobot-ai-control.png and /dev/null differ diff --git a/mintlify/assets/phosphobot-aitraining.png b/mintlify/assets/phosphobot-aitraining.png deleted file mode 100644 index 720d181..0000000 Binary files a/mintlify/assets/phosphobot-aitraining.png and /dev/null differ diff --git a/mintlify/assets/phosphobot-dashboard.png b/mintlify/assets/phosphobot-dashboard.png deleted file mode 100644 index 3a955df..0000000 Binary files a/mintlify/assets/phosphobot-dashboard.png and /dev/null differ diff --git a/mintlify/assets/policies-act.png b/mintlify/assets/policies-act.png deleted file mode 100644 index 786bb03..0000000 Binary files a/mintlify/assets/policies-act.png and /dev/null differ diff --git a/mintlify/assets/policies-autort.png b/mintlify/assets/policies-autort.png deleted file mode 100644 index 15f4f4f..0000000 Binary files a/mintlify/assets/policies-autort.png and /dev/null differ diff --git a/mintlify/assets/policies-gr00t.png b/mintlify/assets/policies-gr00t.png deleted file mode 100644 index 1723aa0..0000000 Binary files a/mintlify/assets/policies-gr00t.png and /dev/null differ diff --git a/mintlify/assets/policies-openvla.png b/mintlify/assets/policies-openvla.png deleted file mode 100644 index 7838142..0000000 Binary files a/mintlify/assets/policies-openvla.png and /dev/null differ diff --git a/mintlify/assets/policies-pi0-fast.png b/mintlify/assets/policies-pi0-fast.png deleted file mode 100644 index 9e7b674..0000000 Binary files a/mintlify/assets/policies-pi0-fast.png and /dev/null differ diff --git a/mintlify/assets/policies-pi0.5.png b/mintlify/assets/policies-pi0.5.png deleted file mode 100644 index 4869661..0000000 Binary files a/mintlify/assets/policies-pi0.5.png and /dev/null differ diff --git a/mintlify/assets/policies-pi0.png b/mintlify/assets/policies-pi0.png deleted file mode 100644 index 632960d..0000000 Binary files a/mintlify/assets/policies-pi0.png and /dev/null differ diff --git a/mintlify/assets/policies-rdt.png b/mintlify/assets/policies-rdt.png deleted file mode 100644 index 5edf56d..0000000 Binary files a/mintlify/assets/policies-rdt.png and /dev/null differ diff --git a/mintlify/assets/policies-rt2.png b/mintlify/assets/policies-rt2.png deleted file mode 100644 index 76d477c..0000000 Binary files a/mintlify/assets/policies-rt2.png and /dev/null differ diff --git a/mintlify/assets/policies-smolvla.png b/mintlify/assets/policies-smolvla.png deleted file mode 100644 index 756ffd7..0000000 Binary files a/mintlify/assets/policies-smolvla.png and /dev/null differ diff --git a/mintlify/assets/pybullet.png b/mintlify/assets/pybullet.png deleted 
file mode 100644 index 122d23a..0000000 Binary files a/mintlify/assets/pybullet.png and /dev/null differ diff --git a/mintlify/assets/recording_parameters.png b/mintlify/assets/recording_parameters.png deleted file mode 100644 index cda7903..0000000 Binary files a/mintlify/assets/recording_parameters.png and /dev/null differ diff --git a/mintlify/assets/rpi-1.png b/mintlify/assets/rpi-1.png deleted file mode 100644 index cc7f5d7..0000000 Binary files a/mintlify/assets/rpi-1.png and /dev/null differ diff --git a/mintlify/assets/rpi-2.png b/mintlify/assets/rpi-2.png deleted file mode 100644 index dbd0a0b..0000000 Binary files a/mintlify/assets/rpi-2.png and /dev/null differ diff --git a/mintlify/assets/rpi-3.png b/mintlify/assets/rpi-3.png deleted file mode 100644 index 52ee85c..0000000 Binary files a/mintlify/assets/rpi-3.png and /dev/null differ diff --git a/mintlify/assets/rpi-4.png b/mintlify/assets/rpi-4.png deleted file mode 100644 index b8f7dc1..0000000 Binary files a/mintlify/assets/rpi-4.png and /dev/null differ diff --git a/mintlify/assets/so100clamps.jpg b/mintlify/assets/so100clamps.jpg deleted file mode 100644 index 071fa9f..0000000 Binary files a/mintlify/assets/so100clamps.jpg and /dev/null differ diff --git a/mintlify/assets/stereo_cam.png b/mintlify/assets/stereo_cam.png deleted file mode 100644 index 2750945..0000000 Binary files a/mintlify/assets/stereo_cam.png and /dev/null differ diff --git a/mintlify/assets/stereo_cam_example.jpg b/mintlify/assets/stereo_cam_example.jpg deleted file mode 100644 index d8d1872..0000000 Binary files a/mintlify/assets/stereo_cam_example.jpg and /dev/null differ diff --git a/mintlify/assets/training-pi0.5.png b/mintlify/assets/training-pi0.5.png deleted file mode 100644 index cddae48..0000000 Binary files a/mintlify/assets/training-pi0.5.png and /dev/null differ diff --git a/mintlify/assets/wrist-camera.jpg b/mintlify/assets/wrist-camera.jpg deleted file mode 100644 index 87778c1..0000000 Binary files a/mintlify/assets/wrist-camera.jpg and /dev/null differ diff --git a/mintlify/basic-usage/dataset-operations.mdx b/mintlify/basic-usage/dataset-operations.mdx deleted file mode 100644 index 0b20dd4..0000000 --- a/mintlify/basic-usage/dataset-operations.mdx +++ /dev/null @@ -1,80 +0,0 @@ ---- -title: "Manipulate LeRobot datasets" -description: "How to repair, merge, split and delete LeRobot datasets" ---- - -import InstallCode from '/snippets/install-code.mdx'; - -You just recorded a LeRobot dataset with your robot. Maybe you also downloaded a dataset from the HuggingFace hub. With phospho, you can: - -- repair corrupted datasets -- merge two datasets into one -- split a dataset into multiple datasets (e.g. training/validation/test sets) -- delete episodes from a dataset - - - - -## Prerequisites - -You need to install the phosphobot software on your computer. If you haven't done it yet, follow the [installation guide](/installation). - - - - -For any of this operations, go to the dashboard and click on the `Browse Datasets` tab. Then move to the `lerobot_v2.1` folder. - -You will see all your local datasets. To download a dataset from the HuggingFace hub, click on the `Add from hub` button. It will be downloaded and added to your local datasets. - -## Repair a dataset - -Select a dataset and click on the `Repair Selected Dataset` button. This will check that your dataset is valid and fix common LeRobot issues. - -## Merge two datasets - -Select two datasets and click on the `Merge Selected Datasets` button. 
This will merge the two datasets into a single dataset. -For now, you can only merge two local datasets at a time. If you need to merge more, you can do it recursively. - -## Split a dataset - -Select a dataset and click on the `Split Selected Dataset` button. This will split the dataset into two datasets. - -## Delete a dataset - -Select a dataset and click on the `Delete Selected Dataset` button. This will delete the dataset from your local datasets. - -## Upload the dataset back to HuggingFace - -Click the 3 dots on the right of the dataset and select `Push to Hugging Face Hub`. This will upload the dataset to your HuggingFace account. - -## Visualize your dataset - -Once your dataset is uploaded to HuggingFace, you can view it using the [LeRobot Dataset Visualizer](https://huggingface.co/spaces/lerobot/visualize_dataset). This will also check that your dataset is valid. - -![LeRobot dataset visualizer](/assets/lerobot_dataset_viz.png) - - - The dataset visualizer only works with the `AVC1` video codec. If you used - another codec, you may see black screens in the video preview. Preview - directly the videos files in a video player by opening your recording locally: - `~/phosphobot/recordings/lerobot_v2/DATASET_NAME/video`. - - -Looking good? You're ready to train your AI model! - -# What's next - - - How to train an AI model from a dataset you recorded - diff --git a/mintlify/basic-usage/dataset-recording.mdx b/mintlify/basic-usage/dataset-recording.mdx deleted file mode 100644 index d8e403f..0000000 --- a/mintlify/basic-usage/dataset-recording.mdx +++ /dev/null @@ -1,103 +0,0 @@ ---- -title: "Record a robotics dataset" -description: "How to record a robotics dataset with your robot?" ---- - -import InstallCode from '/snippets/install-code.mdx'; -import GetMQApp from '/snippets/get-mq-app.mdx'; -import TeleopInstructions from '/snippets/teleop-instructions.mdx'; - - -The easiest way to record datasets is to use the phospho Meta Quest app to control your robot arm. - -Recorded datasets are saved in the `lerobot_v2` format from **[LeRobot](https://huggingface.co/lerobot)** and uploaded to your HuggingFace account. - - - -Alternatively, you can implement your own dataset recording logic on top of the phospshobot API. Use the [Start Recording Episode](/recording/start-recording-episode) and [Stop Recording Episode](/recording/stop-recording-episode) endpoints to start and stop recording episodes. You can also read the [joints positions](/control/read-joints). - -## Prerequisites - -1. You need a robot arm such as the SO-100, the SO-101, or [other compatible hardware](https://github.com/phospho-app/phosphobot). Get the [phosphot starter pack here](https://robots.phospho.ai). -2. Install [the phosphobot software](/installation) - - - -3. Connect your cameras to the computer. Start the phosphobot server. - - ```bash - phosphobot run - ``` -4. Complete the [quickstart](/so-100/quickstart) and check that you can [control your robot](/basic-usage/teleop). -5. You have the **[phosphobot teleoperation app](/examples/teleop)** is installed on your **Meta Quest 2, Pro, 3 or 3s** - - - - -# 1. Set up your Hugging Face token - -To sync datasets, you need a Hugging Face token with write access. Follow these steps to generate one: - -1. Log in to your Hugging Face account. You can create [one here for free](https://huggingface.co) -2. Go to **Profile** and click **Access Tokens** in the sidebar. -3. Select the **Write** option to grant write access to your account. 
This is necessary for creating new datasets and uploading files. Name your token and click **Create token**. - -4. **Copy the token** and **save it** in a secure place. You will need it later. - -5. Make sure the phosphobot server is running. Open a browser and access `localhost` or `phosphobot.local` if you're using the control module. Then go to the Admin Configuration. - -6. **Paste the Hugging Face token**, and **save it**. - -![Paste your huggingface token here](/assets/admin-settings-huggingface.png) - -## 2. Set your dataset name and parameters - -Go to the _Admin Configuration_ page of your phospshobot dashboard. You can adjust settings. The most important are: - -- **Dataset Name**: The name of the dataset you want to record. -- **Task**: A text description of the task you're about to record. For example: _"Pick up the lego brick and put it in the box"_. This helps you remember what you recorded and is used by some AI models to understand the task. -- **Camera**: The cameras you want to record. By default, all cameras are recorded. You can select the cameras to record in the Admin Configuration. -- **Video Codec**: The video codec used to record the videos. The default is `AVC1`, which is the most efficient codec. If you're having compatibility issues due to unavailable codecs (eg on Linux), switch to `mp4v` which is more compatible. - - -## 3. How to record a dataset using the phosphobot teleoperation Meta Quest app? - - - -## 4. Check your dataset - -Datasets are saved on the computer running the phosphobot server at `~/phosphobot/recordings/DATASET_NAME` folder in the phosphobot directory. Explore the recordings in the phosphobot dashboard using the _Dataset Browser_. - -If you added your Hugging Face token in the dashboard, the recorded datasets are **automatically uploaded to your HuggingFace account.** - -Go to your [Hugging Face profile](https://huggingface.co) to see the uploaded datasets. - -## 5. Visualize your dataset - -Once your dataset is uploaded to HuggingFace, you can view it using the [LeRobot Dataset Visualizer](https://huggingface.co/spaces/lerobot/visualize_dataset). - -![LeRobot dataset visualizer](/assets/lerobot_dataset_viz.png) - - - The dataset visualizer only works with the `AVC1` video codec. If you used another codec, you may see black screens in the video preview. - Preview directly the videos files in a video player by opening your recording locally: `~/phosphobot/recordings/lerobot_v2/DATASET_NAME/video`. - - -Looking good? You're ready to train your AI model! - -# What's next - - - How to train an AI model from a dataset you recorded - \ No newline at end of file diff --git a/mintlify/basic-usage/inference.mdx b/mintlify/basic-usage/inference.mdx deleted file mode 100644 index e7266b4..0000000 --- a/mintlify/basic-usage/inference.mdx +++ /dev/null @@ -1,315 +0,0 @@ ---- -title: "Control robot with AI models" -description: "How to make an AI model control your robot?" ---- - -You just [trained an AI model](/basic-usage/training) and now you want to use it to control your robot. Let's see how you can do that. - - - Disclaimer: Letting an AI control your robot carries risk. Clear the area from - pets, people and objects. You are the only one **responsible** for any damage - caused to your robot or its surroundings. - - -## Control your robot with AI from the phosphobot dashboard - -If you trained your model using phosphobot, you can control your robot directly from the phosphobot dashboard. 
- - - You can fine tune the model in a single click from the dashboard. [Go here to - learn how.](/basic-usage/training) - - -1. **Connect your robots and your cameras** to your computer. **Run the phosphobot server** and go to the phosphobot dashboard in your browser: [http://localhost](http://localhost) - -```bash -phosphobot run -``` - -2. Create a phospho account or log in by clicking on the **Sign in** button in the top right corner. -3. _(If not already done)_ Add your Hugging Face token in the **Admin Settings** tab with **Write authorization**. [Read the full guide here](/basic-usage/dataset-recording#1-set-up-your-hugging-face-token). -4. In the **AI Training and Control** section, enter the instruction you want to give the robot and click on **Go to AI Control**. Accept the disclaimer. You'll be redirected to the AI Control page. - -![phosphobot ai control panel](/assets/phosphobot-ai-control.png) - -5. In the **Model ID**, enter the name of your model on Hugging Face (example: `phospho-app/YOUR_DATASET_NAME-A_RANDOM_ID`). Double check the camera angles so that they match the ones you used to record the dataset. - -6. Click on **Start AI Control**. Please wait: the first time, starting a GPU instance and loading the model can take up to 60 seconds. Then the robot will start moving. - -You can pause, resume, and stop the AI control at any time by clicking on the control buttons. - -If your model supports it, you can edit the **Instruction** field to change the instruction and run it again to see how the robot reacts. - - - Join the Discord to ask questions and share your demos! - - -## How to control your robot with an AI model from a python script? - -If you're using a different model or want more fine-grained control, you can use the `phosphobot` python module to control your robot with an AI model. - -### 1. Setup an inference server - -First, you need to setup an inference server. This server runs on a beefy machine with a GPU that can run the AI model. It can be your own machine, a cloud server, or a dedicated server. - - - If you choose a remote location, chose the closest location to minimize - latency. - - -To setup the inference server, follow the instructions in the link below: - - - How to setup the inference server? - - -### 2. Call your inference server from a python script - -Open a terminal and run the [phosphobot server.](/installation) - -```bash -phosphobot run -``` - -Then, create a new python file called `inference.py`. Inside, copy the content of an example script below. 
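All of the example scripts below follow the same control loop: grab the latest camera frames, read the current joint state from the phosphobot server over HTTP, run the policy to get a chunk of actions, and write the predicted joint angles back at roughly 30 Hz. Here is a minimal, model-agnostic sketch of that loop. The `AllCameras` and `phosphobot.am` imports match the examples below; treat the rest (single camera, the 240x320 resize, the `angles` key) as assumptions to adapt to your own setup.

```python inference.py
# Minimal sketch of the shared control loop used by the examples below.
# Swap ACT for Pi0 or Gr00tN1 depending on the model you trained.
import time

import httpx
import numpy as np
from phosphobot.am import ACT
from phosphobot.camera import AllCameras

PHOSPHOBOT_API_URL = "http://localhost:80"

allcameras = AllCameras()
time.sleep(1)  # give the cameras a moment to initialize

model = ACT()  # connects to the inference server you set up in step 1

while True:
    # One context camera, resized to 240x320 (adjust to your dataset)
    images = [allcameras.get_rgb_frame(camera_id=0, resize=(240, 320))]

    # Current joint angles from the phosphobot server
    state = httpx.post(f"{PHOSPHOBOT_API_URL}/joints/read").json()

    # The policy returns a chunk of future joint targets
    actions = model({"state": np.array(state["angles"]), "images": np.array(images)})

    # Replay the chunk at 30 Hz
    for action in actions:
        httpx.post(
            f"{PHOSPHOBOT_API_URL}/joints/write",
            json={"angles": np.asarray(action).flatten().tolist()},
        )
        time.sleep(1 / 30)
```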
- - - - - -### Example script for ACT - -```python -# pip install --upgrade phosphobot -PHOSPHOBOT_API_URL = "http://localhost:80" -allcameras = AllCameras() -time.sleep(1) # Camera warmup - -# Connect to ACT server -model = ACT() - -while True: - # Capture multi-camera frames (adjust camera IDs and size as needed) - images = [allcameras.get_rgb_frame(0, resize=(240, 320))] - - # Get current robot state - state = httpx.post(f"{PHOSPHOBOT_API_URL}/joints/read").json() - - # Generate actions - actions = model( - {"state": np.array(state["angles"]), "images": np.array(images)} - ) - - # Execute actions at 30Hz - for action in actions: - httpx.post( - f"{PHOSPHOBOT_API_URL}/joints/write", json={"angles": action[0].tolist()} - ) - time.sleep(1 / 30) -``` - - - - -### Example script for Pi0.5 - -```python -#pip install --upgrade phosphobot -from phosphobot.camera import AllCameras -import httpx -from phosphobot.am import Pi0 - -import time -import numpy as np - -# Connect to the phosphobot server -PHOSPHOBOT_API_URL = "http://localhost:80" - -# Get a camera frame -allcameras = AllCameras() - -# Need to wait for the cameras to initialize -time.sleep(1) - -# Instantiate the model -model = Pi0(server_url="YOUR_SERVER_URL") - -while True: - # Get the frames from the cameras - # We will use this model: PLB/pi0-so100-orangelegobrick-wristcam - # It requires 2 cameras (a context cam and a wrist cam) - images = [ - allcameras.get_rgb_frame(camera_id=0, resize=(240, 320)), - allcameras.get_rgb_frame(camera_id=1, resize=(240, 320)), - ] - - # Get the robot state - state = httpx.post(f"{PHOSPHOBOT_API_URL}/joints/read").json() - - inputs = { - "state": np.array(state["angles_rad"]), - "images": np.array(images), - "prompt": "Pick up the orange brick", - } - - # Go through the model - actions = model(inputs) - - for action in actions: - # Send the new joint postion to the robot - httpx.post( - f"{PHOSPHOBOT_API_URL}/joints/write", json={"angles": action.tolist()} - ) - # Wait to respect frequency control (30 Hz) - time.sleep(1 / 30) -``` - - - - -### Example script for gr00t-n1 - -You need to install the `torch` and `zmq` libraries. - -```bash -pip install torch zmq -``` - -You also need to run a GPU with the gr00t model and inference server. [Use this repo](https://github.com/phospho-app/Isaac-GR00T) to run the server. - -```python -# pip install --upgrade phosphobot -# /// script -# requires-python = ">=3.10" -# dependencies = [ -# "cv2", -# "phosphobot", -# "torch", -# "zmq", -# ] -# /// -import time - -import cv2 -import numpy as np - -from phosphobot.am import Gr00tN1 -import httpx -from phosphobot.camera import AllCameras - -host = "YOUR_SERVER_IP" # Change this to your server IP (this is the IP of the machine running the Gr00tN1 server using a GPU) -port = 5555 - -# Change this with your task description -TASK_DESCRIPTION = ( - "Pick up the green lego brick from the table and put it in the black container." 
-) - -# Connect to the phosphobot server, this is different from the server IP above -PHOSPHOBOT_API_URL = "http://localhost:80" - -allcameras = AllCameras() -time.sleep(1) # Wait for the cameras to initialize - -while True: - images = [ - allcameras.get_rgb_frame(camera_id=0, resize=(320, 240)), - allcameras.get_rgb_frame(camera_id=1, resize=(320, 240)), - ] - - for i in range(0, len(images)): - image = images[i] - if image is None: - print(f"Camera {i} is not available.") - continue - - image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) - - # Add a batch dimension (from (240, 320, 3) to (1, 240, 320, 3)) - converted_array = np.expand_dims(image, axis=0) - converted_array = converted_array.astype(np.uint8) - images[i] = converted_array - - # Create the model, you might need to change the action keys based on your model, these can be found in the experiment_cfg/metadata.json file of your Gr00tN1 model - model = Gr00tN1(server_url=host, server_port=port) - - response = httpx.post(f"{PHOSPHOBOT_API_URL}/joints/read").json() - state = response["angles_rad"] - # Take a look at the experiment_cfg/metadata.json file in your Gr00t model and check the names of the images, states, and observations - # You may need to adapt the obs JSON to match these names - # The following JSON should work for one arm and 2 video cameras - obs = { - "video.image_cam_0": images[0], - "video.image_cam_1": images[1], - "state.arm": state[0:6].reshape(1, 6), - "annotation.human.action.task_description": [TASK_DESCRIPTION], - } - - action = model.sample_actions(obs) - - for i in range(0, action.shape[0]): - httpx.post( - f"{PHOSPHOBOT_API_URL}/joints/write", json={"angles": action[i].tolist()} - ) - # Wait to respect frequency control (30 Hz) - time.sleep(1 / 30) -``` - - - - -### Other models - -You can implement the `ActionModel` class with your own logic [here](https://github.com/phospho-app/phosphobot/blob/main/phosphobot/am/models.py). - -For more information, check out the implementation [here](https://github.com/phospho-app/phosphobot/blob/main/phosphobot/am/models.py). - - - - - -To run the script, install the phosphobot python module. Then, run the script. - -```bash -pip install phosphobot -python your_script.py -``` - -## What's next? - - - - Join the Discord to ask questions, get help from others and get updates (we - ship almost daily) - - - Learn more about Robotics AI models - - diff --git a/mintlify/basic-usage/teleop.mdx b/mintlify/basic-usage/teleop.mdx deleted file mode 100644 index 27a698c..0000000 --- a/mintlify/basic-usage/teleop.mdx +++ /dev/null @@ -1,170 +0,0 @@ ---- -title: "Control your robot arm" -description: "How to remote control a robot arm with a keyboard, in VR, with an API or a leader arm?" ---- - -import InstallCode from '/snippets/install-code.mdx'; -import GetMQApp from '/snippets/get-mq-app.mdx'; -import TeleopInstructions from '/snippets/teleop-instructions.mdx'; - -phosphobot lets you control the SO-100 or SO-101 robot arm with: - -- your keyboard -- a game controller -- the HTTP API -- the Meta Quest app -- a leader arm or another follower arm -- by playing back a recording -- with AI - -You can also control other kind of robots and [write your own controllers.](https://github.com/phospho-app/phosphobot/tree/main/phosphobot) - - - If you don't have a robot arm, you can get hardware [here](https://robots.phospho.ai/). - - -## Prerequisites - -To control your robot, you need to install [phosphobot](/installation). 
Make sure the robot was [calibrated with phosphobot](/so-100/quickstart). No robot? Get one [here](https://robots.phospho.ai/?utm_source=docs). - - - -## Control with a keyboard - -Use `phosphobot` to control your SO-100, SO-101, or any supported robot arm with your keyboard, using the **arrow keys** to move the robot arm in the desired direction. - - - - -Go to `localhost` in your web browser to access the phosphobot dashboard. Go to **Keyboard Control**. Click on the **Start robot** button to start controlling the robot arm with your keyboard. Click on the **Stop robot** button to stop controlling the robot arm with your keyboard. - - -## Control with a game controller - -You can control your robot arm using standard game controllers (Xbox, PlayStation, or similar) for more intuitive and ergonomic control. - - - -1. **Connect your controller** to your computer via USB or Bluetooth. - -2. Go to `localhost` in your web browser to access the phosphobot dashboard. Navigate to the **Control** page and select the **Gamepad control** tab. - -3. **Press any button** on your controller to activate it. The dashboard will detect your controller automatically. - -4. The robot will **start moving automatically** once the gamepad is detected. - -**Control mapping:** -- **Left stick**: Rotate (X-axis) and move up/down (Y-axis) -- **Right stick**: Strafe left/right (X-axis) and move forward/backward (Y-axis) -- **D-Pad or face buttons (ABXY)**: Wrist pitch and roll control -- **L1/R1 bumpers**: Toggle gripper open/close -- **L2/R2 triggers**: Analog gripper control (0-100%) -- **Start/Menu**: Move arm to sleep position - - - The gamepad control feature requires a browser with Gamepad API support (Chrome/Edge 21+, Firefox 29+, Safari 10.1+). - - - -## Control in Virtual Reality - -Control your robot arm in virtual reality with the Meta Quest app. This app lets you control your robot arm in VR, using the Meta Quest controllers. - - - - - - - - - -## Control with a leader arm - -phosphobot supports one or multiple leader arms. A leader arm is a robot arm that you can use to control another robot arm, called the follower arm. The leader arm is used to control the follower arm in real time, allowing you to control the follower arm as if you were physically present. - - - -1. Plug the leader arm. - -2. Calibrate the leader arm the same way you calibrated the robot arm. - -3. Go to the phosphobot dashboard, in the **Control** page and click on the **Leader Arm** section. - -You have more than one pair leader/follower arms, using the + button to add more - -### Does the leader arm need to be calibrated? - -Yes, the leader arm needs to be calibrated the same way you calibrated the robot arm. The calibration is done in the **Calibration** page of the phosphobot dashboard. - -### Can you do bimanual teleoperation with two leader arms? Can I control two robot arms with two leader arms? - -Yes, you can control two robot arms with two leader arms. You need to plug both leader arms and calibrate them the same way you calibrated the robot arms. Then, go to the **Control** page of the phosphobot dashboard and click on the **Leader Arm** section. You can add multiple leader arms by clicking on the + button. - -### Is the leader arm mandatory? - -No, the leader arm is optional. You can control the robot arm with the keyboard, the game controller, the HTTP API, or the Meta Quest app without a leader arm. The leader arm is just an additional way to control the robot arm. 
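You can also start and stop leader-follower control from a script. phosphobot exposes `/move/leader/start` and `/move/leader/stop` endpoints for this (see the API reference). A minimal sketch, assuming both endpoints accept an empty JSON body — check the interactive docs at `localhost/docs` for the exact request schema:

```python leader_follower.py
# Toggle leader-follower control through the phosphobot HTTP API.
# Assumption: an empty JSON body is accepted; the real schema (e.g. selecting
# a specific leader/follower pair) is described in the interactive API docs.
import time

import requests

PHOSPHOBOT_API_URL = "http://localhost:80"

# Start mirroring the leader arm on the follower arm
requests.post(f"{PHOSPHOBOT_API_URL}/move/leader/start", json={}).raise_for_status()

time.sleep(10)  # teleoperate for 10 seconds

# Stop leader-follower control
requests.post(f"{PHOSPHOBOT_API_URL}/move/leader/stop", json={}).raise_for_status()
```

The next section covers the rest of the HTTP API in more detail.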
- -## Control with the HTTP API - -Once your phosphobot server is running, you can send your first command to the robot arm. - - - Make sure the robot is well fixed and the area around is clear before sending - any command. - - -1. Go to the interactive API docs on the phosphobot dashboard: [localhost/docs](http://localhost/docs). On the control module, the address [phosphobot.local/docs](http://phosphobot.local/docs). This page lets you send commands to the robot arm. - -2. Trigger the `/move/init` endpoint to initialize the robot (click `Try it out` and then press `Execute`). - -3. Your robot arm moves to the default position. _It's alive!_ ๐ŸŽ‰ - -4. Now, you can call the `/move/absolute` endpoint to move the robot to a specific position. The distances are in centimeters, and the angles in degrees. - - -## Control by playing back a recording - -You can [record episodes](./dataset-recording.mdx) and then play them back to control the robot arm using [the recording feature](../recording/play-recording). - -## Control with AI - -You can [train an AI model](./training) and then run [AI control.](./inference) Basically, the AI model will predict the next position of the robot based on the previous position and what the cameras sees. This is a whole [area of research.](../learn/policies) - -## What's next? - - - How to record a dataset with your robot - diff --git a/mintlify/basic-usage/training.mdx b/mintlify/basic-usage/training.mdx deleted file mode 100644 index fb91b8c..0000000 --- a/mintlify/basic-usage/training.mdx +++ /dev/null @@ -1,136 +0,0 @@ ---- -title: "Train a robotics AI model" -description: "How to train a robotics AI model with a dataset?" ---- - -To train an AI model for your robot, you need a robotics dataset. For that, you need to have first [recorded a dataset](/basic-usage/dataset-recording). - -Don't have a dataset? Use one of ours: `phospho-ai/so100-tictactoe` - - - -## Train GR00T-N1-2B in one click from the phosphobot dashboard - -You can fine-tune [Nvidia GR00T-N1-2B](https://huggingface.co/nvidia/GR00T-N1-2B) on your dataset right from the phosphobot dashboard. This is the easiest way to train a AI robotics model. - -1. Launch the phosphobot server and go to the phosphobot dashboard in your browser: [http://localhost](http://localhost) - -```bash -phosphobot run -``` - -2. Create a phospho account or log in by clicking on the **Sign in** button in the top right corner. -3. _(If not already done)_ Add your Hugging Face token in the **Admin Settings** tab with **Write authorization**. This will sync your datasets to Hugging Face. Then, record a dataset using teleoperation. [Read the full guide here](/basic-usage/dataset-recording#1-set-up-your-hugging-face-token). - - - Garbage in, garbage out. Our tests show that training works with about 30 - episodes. It's better for the task to be specific. Have good lighting and - similar setup. - - -4. In the **AI Training and Control** section, enter the the name of your dataset on Hugging Face (example: `PLB/simple-lego-pickup-mono-2`). - -![phosphobot training cloud](/assets/phosphobot-aitraining.png) - -5. Hit the **Train AI Model** button. Your model starts training. Training can take up to 3 hours. Follow the training using the button **View trained models**. - -Your trained model is uploaded to HuggingFace [on the phospho-app account](https://huggingface.co/phospho-app). Its name is something like `phospho-app/YOUR_DATASET_NAME-A_RANDOM_ID`. - -Next up, you can start controlling your robot with the trained model. 
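Training can also be launched programmatically through the `/training/start-training` endpoint mentioned in the note below. A minimal sketch — the JSON field names here are illustrative assumptions, so check the endpoint's schema in the API reference before using them:

```python start_training.py
# Launch a training run via the phosphobot API instead of the dashboard.
# The JSON field names below are assumptions for illustration only —
# check the /training/start-training schema in the interactive API docs.
import requests

PHOSPHOBOT_API_URL = "http://localhost:80"

payload = {
    "dataset_name": "PLB/simple-lego-pickup-mono-2",  # hypothetical field: your dataset on Hugging Face
    "model_type": "gr00t",                            # hypothetical field: model to fine-tune
}

response = requests.post(f"{PHOSPHOBOT_API_URL}/training/start-training", json=payload)
response.raise_for_status()
print(response.json())
```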
- - - -You can add you Weights & Biases token to track your training metrics. Go to the **Admin Settings** tab and add your WandB token (get your token [here](https://wandb.ai/authorize)). - -You can also tweak the training parameters. To do so, you must have your phosphobot server running and being logged in in your dashboard. -Then, use the [/training/start-training](/ai-training/start-training) endpoint to pass more training parameters. - - - - - - Let a trained AI model control your robot - - - Join the Discord for support and updates (we ship almost daily) - - - - -## How to train the ACT (Action Chunking Transformer) model with LeRobot? - -The ACT model is a transformer-based model that learns to chunk actions in a sequence. It is trained on a dataset of action sequences and their corresponding chunked actions. - -LeRobot is a research-oriented library by Hugging Face that provides a simple interface to train AI models. It is still a work in progress, but it is already very powerful. - -Follow [our guide](/learn/ai-models#train-an-act-model-locally-with-lerobot) to train the ACT model on your dataset. - - - Train your ACT model with LeRobot - - - -## How to train Pi0.5 (Pi-Zero point five) model with the SO-100 robot arm? - -[Pi0.5](https://www.physicalintelligence.company/blog/pi05) is a powerful VLA (Vision Language Action model) by Physical Intelligence. They released an open weight model that can be trained on your own datasets. - -This VLA promises **open world generalization** and requires lots of data to properly train, think datasets of several hours. - -You can still **use phosphobot to fine-tune the Pi0.5 model on your own dataset**. - -We will use the phosphobot cloud for this. - -Just head over to the **AI training** tab on phosphobot. It should look a little something like this. - -![AI training tab on phosphobot](/assets/training-pi0.5.png) - -In the top left, enter your **dataset name**, it should be on Hugging Face. For example: `LegrandFrederic/Marker_pickup_piper`. - -Then, select the **Pi0.5 model** from the dropdown menu in the upper right. - -Change the training parameters such as the image keys to match your dataset. If your dataset only contains one camera, you should set the image keys to `["image.name.in.the.dataset"]`, most likely `["observation.images.main"]` if you recorded it with phosphobot. - - -Pro tips: -- Set your wandb token in the admin panel to track your training metrics. -- Use a dataset with at least 15 minutes of data. -- Make sure to set a prompt when recording your dataset. - - -Finally, hit the **Train AI Model** button. Your model starts training. Training can take several hours depending on your dataset size. Follow the training using the button **View trained models**. - -# Next steps - -Test the model you just trained on your robot. See the [Use AI models](/basic-usage/inference) page for more information. 
- - - Let a trained AI model control your robot - diff --git a/mintlify/camera/cameras-refresh.mdx b/mintlify/camera/cameras-refresh.mdx deleted file mode 100644 index bea390e..0000000 --- a/mintlify/camera/cameras-refresh.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /cameras/refresh ---- diff --git a/mintlify/camera/frames.mdx b/mintlify/camera/frames.mdx deleted file mode 100644 index 375d4f0..0000000 --- a/mintlify/camera/frames.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: get /frames ---- diff --git a/mintlify/camera/video-feed-for-camera.mdx b/mintlify/camera/video-feed-for-camera.mdx deleted file mode 100644 index 6f2e221..0000000 --- a/mintlify/camera/video-feed-for-camera.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: get /video/{camera_id} ---- diff --git a/mintlify/control/calibration-sequence.mdx b/mintlify/control/calibration-sequence.mdx deleted file mode 100644 index 2b7d5db..0000000 --- a/mintlify/control/calibration-sequence.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /calibrate ---- \ No newline at end of file diff --git a/mintlify/control/end-effector-state.mdx b/mintlify/control/end-effector-state.mdx deleted file mode 100644 index cd33c51..0000000 --- a/mintlify/control/end-effector-state.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /end-effector/read ---- \ No newline at end of file diff --git a/mintlify/control/gravity-start.mdx b/mintlify/control/gravity-start.mdx deleted file mode 100644 index 86f1f5a..0000000 --- a/mintlify/control/gravity-start.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /gravity/start ---- \ No newline at end of file diff --git a/mintlify/control/gravity-stop.mdx b/mintlify/control/gravity-stop.mdx deleted file mode 100644 index 833b9b2..0000000 --- a/mintlify/control/gravity-stop.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /gravity/stop ---- \ No newline at end of file diff --git a/mintlify/control/mimicking-robots.mdx b/mintlify/control/mimicking-robots.mdx deleted file mode 100644 index 70c91f8..0000000 --- a/mintlify/control/mimicking-robots.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /move/mimick ---- \ No newline at end of file diff --git a/mintlify/control/move-absolute-position.mdx b/mintlify/control/move-absolute-position.mdx deleted file mode 100644 index cf0e700..0000000 --- a/mintlify/control/move-absolute-position.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /move/absolute ---- \ No newline at end of file diff --git a/mintlify/control/move-init.mdx b/mintlify/control/move-init.mdx deleted file mode 100644 index ed56ec6..0000000 --- a/mintlify/control/move-init.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /move/init ---- \ No newline at end of file diff --git a/mintlify/control/move-leader-start.mdx b/mintlify/control/move-leader-start.mdx deleted file mode 100644 index ecebac8..0000000 --- a/mintlify/control/move-leader-start.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /move/leader/start ---- \ No newline at end of file diff --git a/mintlify/control/move-leader-stop.mdx b/mintlify/control/move-leader-stop.mdx deleted file mode 100644 index 7ba5103..0000000 --- a/mintlify/control/move-leader-stop.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /move/leader/stop ---- \ No newline at end of file diff --git a/mintlify/control/move-relative-position.mdx b/mintlify/control/move-relative-position.mdx deleted file mode 100644 index 5ac255e..0000000 --- a/mintlify/control/move-relative-position.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: 
post /move/relative ---- \ No newline at end of file diff --git a/mintlify/control/move-teleoperation-ws.mdx b/mintlify/control/move-teleoperation-ws.mdx deleted file mode 100644 index 19af796..0000000 --- a/mintlify/control/move-teleoperation-ws.mdx +++ /dev/null @@ -1,80 +0,0 @@ ---- -icon: "socks" -title: "Teleoperation Control (WebSocket)" -description: "High frequency control of the robot arm via WebSocket connection for real-time updates and commands." ---- - - -``` - ws://localhost/move/teleop/ws -``` - - - -```json -{ - "nb_actions_received": 5, - "is_object_gripped": false, - "is_object_gripped_source": "left" // or "right" -} -``` - - - -## Overview - -This WebSocket endpoint allows real-time teleoperation of a robot arm. The connection enables high-frequency command and status updates to ensure precise control and immediate feedback. - -## Data Flow - -**Incoming Data**: The client sends JSON-formatted control data for the robot's movement. -**Outgoing Data**: The server sends JSON-formatted status updates, including the number of actions received and the current gripped status of the object. - - -## Control Data - -The client sends control data in JSON format, which includes the following fields: - - -```json -{ - "x": 0.5, - "y": 1.2, - "z": -0.7, - "rx": 45.0, - "ry": 30.0, - "rz": 90.0, - "open": 1, - "source": "left" // or "right" -} -``` - -See the [Teleoperation control endpoint](/control/move-teleoperation) endpoint for more details on each field. - -## Status Updates - -Status updates are sent back to the client every second or when there is a change in the gripped status of the object. - - -```json -{ - "nb_actions_received": 5, - "is_object_gripped": false, - "is_object_gripped_source": "left" -} -``` - - - -## Error Handling - -If the received data cannot be decoded as JSON, an error message is sent back to the client: - -```json -{ - "error": "JSON decode error: " -} -``` - diff --git a/mintlify/control/move-teleoperation.mdx b/mintlify/control/move-teleoperation.mdx deleted file mode 100644 index 2ef367e..0000000 --- a/mintlify/control/move-teleoperation.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /move/teleop ---- \ No newline at end of file diff --git a/mintlify/control/read-joints.mdx b/mintlify/control/read-joints.mdx deleted file mode 100644 index ac7d884..0000000 --- a/mintlify/control/read-joints.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /joints/read ---- \ No newline at end of file diff --git a/mintlify/control/read-temperature.mdx b/mintlify/control/read-temperature.mdx deleted file mode 100644 index f47e1ec..0000000 --- a/mintlify/control/read-temperature.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /temperature/read ---- \ No newline at end of file diff --git a/mintlify/control/read-torques.mdx b/mintlify/control/read-torques.mdx deleted file mode 100644 index c6106f5..0000000 --- a/mintlify/control/read-torques.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /torque/read ---- \ No newline at end of file diff --git a/mintlify/control/turn-torque.mdx b/mintlify/control/turn-torque.mdx deleted file mode 100644 index e0687db..0000000 --- a/mintlify/control/turn-torque.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /torque/toggle ---- \ No newline at end of file diff --git a/mintlify/control/write-joints.mdx b/mintlify/control/write-joints.mdx deleted file mode 100644 index b3737f3..0000000 --- a/mintlify/control/write-joints.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /joints/write ---- \ No newline at 
end of file diff --git a/mintlify/control/write-temperature.mdx b/mintlify/control/write-temperature.mdx deleted file mode 100644 index 9ca3bad..0000000 --- a/mintlify/control/write-temperature.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /temperature/write ---- \ No newline at end of file diff --git a/mintlify/examples/control.mdx b/mintlify/examples/control.mdx deleted file mode 100644 index 147f6f5..0000000 --- a/mintlify/examples/control.mdx +++ /dev/null @@ -1,223 +0,0 @@ ---- -title: "Simple controls" -description: "How to control the robot arm with API calls." ---- - -[phosphobot](../installation.mdx) provides a simple API to control the robot arm. You can use it to move the robot arm, open or close the gripper, and more. - - - -All code examples can be found in our open source repo [here](https://github.com/phospho-app/phosphobot). - - -## Square movement - - - - - - - -This implementation uses the /move/relative endpoint to move the robot in a square. -We simply indicate where we want to move the robot relative to its current position. - -```python square.py -import time -import requests - -# Configurations -PI_IP: str = "127.0.0.1" -PI_PORT: int = 80 -NUMBER_OF_SQUARES: int = 100 - - -# Function to call the API -def call_to_api(endpoint: str, data: dict = {}): - response = requests.post(f"http://{PI_IP}:{PI_PORT}/move/{endpoint}", json=data) - return response.json() - - -# Example code to move the robot in a square of 4 cm x 4 cm -# 1 - Initialize the robot -call_to_api("init") -print("Initializing robot") -time.sleep(2) - -# We move it to the top left corner of the square -call_to_api( - "relative", {"x": 0, "y": -3, "z": 0.03, "rx": 0, "ry": 0, "rz": 0, "open": 0} -) -print("Moving to top left corner") -time.sleep(0.2) - -# With the move relative endpoint, we can move relative to its current position -# 2 - We make the robot follow a 3 cm x 3 cm square -for _ in range(NUMBER_OF_SQUARES): - # Move to the top right corner - call_to_api( - "relative", {"x": 0, "y": 3, "z": 0, "rx": 0, "ry": 0, "rz": 0, "open": 0} - ) - print("Moving to top right corner") - time.sleep(0.2) - - # Move to the bottom right corner - call_to_api( - "relative", {"x": 0, "y": 0, "z": -3, "rx": 0, "ry": 0, "rz": 0, "open": 0} - ) - print("Moving to bottom right corner") - time.sleep(0.2) - - # Move to the bottom left corner - call_to_api( - "relative", {"x": 0, "y": -3, "z": 0, "rx": 0, "ry": 0, "rz": 0, "open": 0} - ) - print("Moving to bottom left corner") - time.sleep(0.2) - - # Move to the top left corner - call_to_api( - "relative", {"x": 0, "y": 0, "z": 3, "rx": 0, "ry": 0, "rz": 0, "open": 0} - ) - print("Moving to top left corner") - time.sleep(0.2) -``` - - - -## Circle movement - -### Slow - - - - - - - -Since it's harder to control the robot's position using relative movements to create a circle, we use the absolute movement instead. -We calculate the position of the robot in the circle using the sin and cos functions to create a circular motion. 
- -```python circle_slow.py -import math -import time -import requests - -# Configurations -PI_IP: str = "127.0.0.1" -PI_PORT: int = 80 -NUMBER_OF_STEPS: int = 10 -NUMBER_OF_CIRCLES: int = 15 - - -# Function to call the API -def call_to_api(endpoint: str, data: dict = {}): - response = requests.post(f"http://{PI_IP}:{PI_PORT}/move/{endpoint}", json=data) - return response.json() - - -# Example code to move the robot in a circle -# 1 - Initialize the robot -call_to_api("init") -print("Initializing robot") -time.sleep(2) - -# With the move absolute endpoint, we can move the robot in an absolute position -# 2 - We move the robot in a circle with a diameter of 4 cm -for _ in range(NUMBER_OF_CIRCLES): - for step in range(NUMBER_OF_STEPS): - position_y: float = 4 * math.sin(2 * math.pi * step / NUMBER_OF_STEPS) - position_z: float = 4 * math.cos(2 * math.pi * step / NUMBER_OF_STEPS) - call_to_api( - "absolute", - { - "x": 0, - "y": position_y, - "z": position_z, - "rx": 0, - "ry": 0, - "rz": 0, - "open": 0, - }, - ) - print(f"Step {step} - x: 0, y: {position_y}, z: {position_z}") - time.sleep(0.03) -``` - - - -### Fast - - - - - - - -To quicken the robots movements, we lower the number of steps in the circle. -We also increase the sleep time between each step to avoid the robot moving too fast. - -```python circle_fast.py -import math -import time -import requests - -# Configurations -PI_IP: str = "127.0.0.1" -PI_PORT: int = 80 -NUMBER_OF_STEPS: int = 10 -NUMBER_OF_CIRCLES: int = 15 - - -# Function to call the API -def call_to_api(endpoint: str, data: dict = {}): - response = requests.post(f"http://{PI_IP}:{PI_PORT}/move/{endpoint}", json=data) - return response.json() - - -# Example code to move the robot in a circle -# 1 - Initialize the robot -call_to_api("init") -print("Initializing robot") -time.sleep(2) - -# With the move absolute endpoint, we can move the robot in an absolute position -# 2 - We move the robot in a circle with a diameter of 4 cm -for _ in range(NUMBER_OF_CIRCLES): - for step in range(NUMBER_OF_STEPS): - position_y: float = 4 * math.sin(2 * math.pi * step / NUMBER_OF_STEPS) - position_z: float = 4 * math.cos(2 * math.pi * step / NUMBER_OF_STEPS) - call_to_api( - "absolute", - { - "x": 0, - "y": position_y, - "z": position_z, - "rx": 0, - "ry": 0, - "rz": 0, - "open": 0, - }, - ) - print(f"Step {step} - x: 0, y: {position_y}, z: {position_z}") - time.sleep(0.2) -``` - - diff --git a/mintlify/examples/mcp-for-robotics.mdx b/mintlify/examples/mcp-for-robotics.mdx deleted file mode 100644 index 491275d..0000000 --- a/mintlify/examples/mcp-for-robotics.mdx +++ /dev/null @@ -1,184 +0,0 @@ ---- -title: 'MCP Robotics: Controlling Robots with LLMs using phosphobot MCP' -description: "Connect a Large Language Model to a robot using the Model Context Protocol (MCP) and phosphobot." ---- - -import InstallCode from '/snippets/install-code.mdx'; - -This guide provides the essential code and instructions to get started with **MCP for robotics**. Using [phosphobot](../installation) and the **Model Context Protocol (MCP)**, you can connect a Large Language Model (LLM) like Claude to a robot, enabling it to access camera feeds and trigger actions through a standardized interface. 
- - - -- ๐Ÿ”— **GitHub Repository**: [phospho-mcp-server](https://github.com/phospho-app/phospho-mcp-server) -- ๐Ÿ”Œ **Core Protocol**: [Model Context Protocol](https://github.com/modelcontextprotocol/python-sdk) -- ๐Ÿค– **Key Tools**: [Claude](https://claude.ai/download) and [phosphobot](https://github.com/phospho-app/phosphobot) - -## What is the Model Context Protocol (MCP)? - -The **Model Context Protocol (MCP)** is an open standard that connects Large Language Models to real-world tools and data sources. - -Think of it **like an USB-C port for AI**, a universal translator between an AI and any application. - -MCP allows an LLM to "plug into" different systems, giving it the power to see, reason, and, most importantly, act. For **MCP robotics**, this means giving an AI the hands and eyes to interact with the physical world. - -### What are the key concepts of MCP? - -- **Tools** are real Python functions that the model can call to perform actions. - - Example: pickup_object("banana") to move a robot arm. -- **Resources** are read-only data sources, accessible via URIs. - - Example: file://home/user/notes.txt to expose the content of a local text file. -- **Host / Client / Server architecture** - - Host = the LLM applications that start the connection (e.g. Claude) - - Client = the MCP protocol that conects tools to the LLM - - Server = your app exposing tools/resources (e.g. the phosphobot MCP server) -- **Lifespan** lets you run startup/shutdown code (e.g., to launch a robot process), and share context across tools. - - -### Why MCP for Robotics? - -Before MCP, connecting an AI to a robot required custom, complex integrations for each specific model and robot. **MCP robotics** changes this by creating a universal standard. - -- **Standardized Control**: Any MCP-compatible LLM can control any MCP-enabled robot. -- **Simplified Integration**: It removes the need for fragmented, one-off solutions, creating a "plug-and-play" ecosystem for AI and robotics. [14] -- **Real-World Interaction**: It bridges the gap between AI's reasoning capabilities and a robot's physical actions, enabling tasks like object manipulation based on visual input. - -With **robots MCP**, developers can build powerful applications where an AI can perceive its environment and execute physical tasks. - -## How phosphobot Implements MCP Robotics - -The **phosphobot MCP** integration is a practical example of this protocol in action. The basic demo exposes two primary capabilities to the LLM: - -- **Camera Stream**: A tool that retrieves the current frame from a webcam, giving the LLM vision. -- **Replay Tool**: A tool that triggers a pre-recorded robot action, like picking up an object. - -The **phosphobot MCP server** manages these tools and the communication with the robot's local API. - -## Getting Started with phosphobot MCP - -Follow these steps to set up your **MCP robotics** environment. - -### Prerequisites - -- [Claude for Desktop](https://support.anthropic.com/en/articles/10065433-installing-claude-for-desktop) is installed. -- Python and `git` are installed on your system. -- You are comfortable using a command-line interface. - -### Step 1: Install and Run phosphobot - -**[phosphobot](https://docs.phospho.ai)** is an open-source platform that allows you to control robots, record data, and train robotics AI models. 
- -First, [install phosphobot](../installation) with the command for your OS: - - - -Next, run the phosphobot server, which will listen for commands from the MCP server. -```bash -phosphobot run -``` - -### Step 2: Install the phosphobot MCP Server - -This server exposes the robot's controls to Claude. We recommend installing it with **uv**. - -1. **Install uv**, a fast Python package installer: - -```bash macOS and Linux -curl -LsSf https://astral.sh/uv/install.sh | sh -``` -```powershell Windows -powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex" -``` - - -2. **Clone the repository and install the server**: -```bash -# Clone the phospho MCP server repository -git clone https://github.com/phospho-app/phospho-mcp-server.git - -# Navigate to the correct directory -cd phospho-mcp-server/phospho-mcp-server - -# Install and run the MCP server -uv run mcp install server.py -``` - -This command starts the `phospho` MCP server and registers its tools with Claude. When you open the Claude desktop app, you will see the server and its tools available for use. - -## How It Works: Technical Overview - -The **phosphobot MCP server** communicates with the local phosphobot instance via its REST API (defaulting to `http://localhost:80`). - -- `GET /frames`: Fetches the latest camera image. -- `POST /recording/play`: Executes a pre-recorded robot action. - -The `PhosphoClient` class manages this communication. If you run phosphobot on a port other than 80, you must update the base URL in the `tools/phosphobot.py` file. - -## Testing Your MCP Robotics Setup - -You can test the server with the MCP inspector by running: -```bash -uv run mcp dev server.py -``` - -### Example 1: Using the Robot's Camera - -Ask Claude a question that requires vision: -> โ€œWhat do you see on my desk?โ€ - -Claude will use the `get_camera_frame` tool to answer. - -### Example 2: Controlling the Robot's Actions - -Give Claude a command: -> โ€œPick up the bananaโ€ - -Claude will use the `pickup_object` tool to perform the action. - - -## Available Tools - -### `pickup_object` -Triggers a pre-recorded robotic action. - -```python -@mcp.tool() -def pickup_object(name: Literal["banana", "black circle", "green cross"]) -> str: - """Launches a replay episode to simulate picking up a named object.""" - ... -``` - -### `get_camera_frame` -Captures a JPEG image from the phosphobot camera. - -```python -@mcp.tool() -def get_camera_frame() -> Image: - """Captures a JPEG image from phosphobot's camera via the /frames endpoint.""" - ... -``` - -## FAQ - -**Q: What is `phosphobot`?** -A: `phosphobot` is an open-source platform for **robotics** that helps you control robots, collect data, and train AI models for robotic tasks. - -**Q: What is `phosphobot mcp`?** -A: `phosphobot mcp` refers to the integration of the phosphobot platform with the Model Context Protocol. It allows an LLM like Claude to control a robot managed by phosphobot by using standardized tools for actions and camera feeds. - -**Q: Can I use this with a physical robot?** -A: Yes. `phosphobot` is designed to control physical robots, allowing you to bridge the gap between AI and hardware. - -**Q: Can it only use pre-recorded actions?** -A: No, while the demo uses pre-recorded actions for simplicity, you can extend the `phosphobot MCP server` to include real-time control commands or trigger any AI model trained with phosphobot (eg: ACT, gr00t). 
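To make that extension path concrete, here is a rough sketch of the kind of thin HTTP client the tools above sit on. Only the two endpoints (`GET /frames`, `POST /recording/play`) and the default base URL come from the sections above; the class shape, method names, and the replay payload are illustrative assumptions — the real `PhosphoClient` lives in `tools/phosphobot.py`.

```python phosphobot_client.py
# Illustrative sketch of a thin client around the two phosphobot endpoints
# used by the MCP tools. Method names and the /recording/play payload are
# assumptions — see tools/phosphobot.py for the actual implementation.
import httpx


class PhosphoClient:
    def __init__(self, base_url: str = "http://localhost:80"):
        self.base_url = base_url

    def get_camera_frame(self) -> bytes:
        """Fetch the latest camera image as raw JPEG bytes."""
        response = httpx.get(f"{self.base_url}/frames")
        response.raise_for_status()
        return response.content

    def play_recording(self, name: str) -> None:
        """Replay a pre-recorded episode by name (payload schema assumed)."""
        response = httpx.post(
            f"{self.base_url}/recording/play", json={"episode_name": name}
        )
        response.raise_for_status()
```

The `@mcp.tool()` functions then only need to call these methods and wrap the result in the return types MCP expects (an `Image` for the camera frame, a status string for the replay).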
- -## Additional Resources - -- [Model Context Protocol (MCP) Official GitHub](https://github.com/modelcontextprotocol/python-sdk) -- [phosphobot Official Documentation](https://docs.phospho.ai/installation) - diff --git a/mintlify/examples/teleop-from-anywhere.mdx b/mintlify/examples/teleop-from-anywhere.mdx deleted file mode 100644 index 14dcaea..0000000 --- a/mintlify/examples/teleop-from-anywhere.mdx +++ /dev/null @@ -1,121 +0,0 @@ ---- -title: "VR Control from anywhere in the world" -description: "How to control your robot arm from anywhere in the world using ngrok and a Meta Quest 2, Pro, 3 or 3s" ---- - - -import InstallCode from '/snippets/install-code.mdx'; -import GetMQApp from '/snippets/get-mq-app.mdx'; - -Control your robot arm from anywhere in the world using ngrok and a Meta Quest 2, Pro, 3 or 3s. This lets you collect datasets with manipulators in different locations. - -## Prerequisites - -1. You need a robot arm such as the SO-100, the SO-101, or [other compatible hardware](https://github.com/phospho-app/phosphobot). Get the [phosphot starter pack here](https://robots.phospho.ai). -2. Install [the phosphobot software](/installation) on your computer. - - - -3. Connect robots to your computer. Start the phosphobot server. - - ```bash - phosphobot run - ``` - -4. Complete the [quickstart](/so-100/quickstart) and check that you can [control your robot](/basic-usage/teleop). -5. The **[phosphobot teleoperation app](/examples/teleop)** is installed on your **Meta Quest 2, Pro, 3 or 3s**. > - - - -6. An **ngrok account**. [Sign up here](https://ngrok.com/) (*or use an alternative like Cloudflare Tunnel*). -7. The **ngrok CLI** installed on your device. [Download it here](https://ngrok.com/download) - -## 1. Authenticate ngrok - -To use ngrok, you need to authenticate your account. Open a terminal and run the following command: - - ```bash - ngrok authtoken YOUR_AUTH_TOKEN - ``` - - Replace `YOUR_AUTH_TOKEN` with the token provided in your **ngrok dashboard**. - - -## 2. Create an ngrok Tunnel for your control module - -1. Ensure your phosphobot server is running and connected to the internet. If you're using the control module, turn it on. - -2. SSH into your phosphobot. By default, the password is `password123` - -```bash -ssh phosphobot@phosphobot.local -``` - -3. Run the following command to create a tunnel: - - ```bash - ngrok http 80 - ``` - - *This command tells ngrok to forward traffic from the internet to your local server running on port 80.* - -4. Once the tunnel is active, ngrok will display a forwarding URL in the terminal, such as: - - ``` - Forwarding https://abc123.ngrok.io -> http://localhost:80 - ``` - - This URL is publicly accessible from anywhere in the world and will remain active while the ngrok tunnel is running. Turn off the server to stop the tunnel. - -## 3. Access Your Teleoperation App Remotely - -1. Copy the ngrok `https://` URL displayed in your terminal (e.g., `https://abc123.ngrok.io`). -2. Share this URL with users who need remote access to your teleoperation app. -3. Open the URL in a browser to access the **Admin panel**. -4. In the Meta Quest app, go to `Settings` and enter the **ngrok URL** to connect. - -Ensure your local server remains running while the ngrok tunnel is active. Closing the server will break the connection. - - -## 4. Secure Your Tunnel - -By default, ngrok tunnels are public, meaning anyone with the URL can control your robot and access datasets. - -**To secure your tunnel**: -1. 
**Add basic authentication** -Run the following command to require a username and password to access the tunnel: - - ```bash - ngrok http 80 --auth "username:password" - ``` - - Replace `username` and `password` with your desired credentials. - -2. Alternatively, you can **restrict access to specific IP addresses**. - - Go to the **ngrok dashboard** - - Use the **IP restrictions feature** in the dashboard. - - -## 5. Monitor Traffic (Optional) - -Ngrok provides a web interface for inspecting traffic and requests. To access it: - -1. Open your browser and go to `http://localhost:4040`. -2. Here, you can view **detailed logs** of incoming requests and responses. - - -## What's Next? - -Now that your teleoperation app is accessible remotely, you can: - -- Share the ngrok URL with collaborators for real-time teleoperation. -- Record datasets remotely and upload them to your Hugging Face account. -- Train AI models using the data collected from remote sessions. - - - Learn how to use your recorded datasets to train your first AI model. - \ No newline at end of file diff --git a/mintlify/examples/teleop.mdx b/mintlify/examples/teleop.mdx deleted file mode 100644 index 0c17da7..0000000 --- a/mintlify/examples/teleop.mdx +++ /dev/null @@ -1,103 +0,0 @@ ---- -title: "VR Control with a Meta Quest" -description: "How to control your robot arm using a Meta Quest 2, Pro, 3, 3s over a local WiFi network" ---- - -import InstallCode from '/snippets/install-code.mdx'; -import GetMQApp from '/snippets/get-mq-app.mdx'; -import TeleopInstructions from '/snippets/teleop-instructions.mdx'; - - - -Control your robot arm using a Meta Quest 2, Pro, 3, or 3s. VR control makes bimanual control more intuitive and lets you collect data faster. - - - - - - - -## Prerequisites - -1. You need a robot arm such as the SO-100, the SO-101, or [other compatible hardware](https://github.com/phospho-app/phosphobot). Get the [phosphot starter pack here](https://robots.phospho.ai). -2. Install [the phosphobot software](/installation) on your computer. - - - -3. Connect robots to your computer. Start the phosphobot server. - - ```bash - phosphobot run - ``` - -4. Complete the [quickstart](/so-100/quickstart) and check that you can [control your robot](/basic-usage/teleop). -5. The **[phosphobot teleoperation app](/examples/teleop)** is installed on your **Meta Quest 2, Pro, 3 or 3s**. > - - - -## How to control your robot arm with the Meta Quest app? Step by step instructions - - - -## Examples of VR control - - - -The phospho Meta Quest app lets you operate the robot arm in real time. With the built-in stereo camera system, you can see the robot's environment in 3D, allowing you to interact as if you were physically present. - - - - - - - -## What's next? - -Use your **recorded datasets** to **train AI models**. - - - - - - Follow this guide to teleoperate the robot arm and train your first AI model. - - - - - - - diff --git a/mintlify/examples/vision.mdx b/mintlify/examples/vision.mdx deleted file mode 100644 index 64e6c4d..0000000 --- a/mintlify/examples/vision.mdx +++ /dev/null @@ -1,464 +0,0 @@ ---- -title: "Computer vision" -description: "How to leverage vision models for robotics." ---- - -The dev kit comes with a stereoscopic camera, ideal for 3D vision and AI workflows. -All code examples can be found in our open source repo [here](https://github.com/phospho-app/phosphobot). - -## Wave back - -Using OpenCV we can detect a wave gesture and make the robot wave back in just a couple lines of code. 
- - - -# - - - -```python wave.py -import cv2 -import sys -import time -import signal -import requests -import mediapipe as mp # type: ignore - -# Configurations -PI_IP: str = "127.0.0.1" -PI_PORT: int = 8080 - -# Initialize MediaPipe Hand tracking -mp_hands = mp.solutions.hands -hands = mp_hands.Hands( - static_image_mode=False, max_num_hands=1, min_detection_confidence=0.7 -) - - -# Handle Ctrl+C to exit the program gracefully -def signal_handler(sig, frame): - print("\nExiting gracefully...") - cap.release() - cv2.destroyAllWindows() - hands.close() - sys.exit(0) - - -signal.signal(signal.SIGINT, signal_handler) - - -# Function to call the API -def call_to_api(endpoint: str, data: dict = {}): - response = requests.post(f"http://{PI_IP}:{PI_PORT}/move/{endpoint}", json=data) - return response.json() - - -def wave_motion(): - points = 5 - for _ in range(2): - for i in range(points): - call_to_api( - "absolute", - { - "x": 0, - "y": 2 * (-1) ** i, - "z": 0, - "rx": -90, - "ry": 0, - "rz": 0, - "open": i % 2 == 0, - }, - ) - time.sleep(0.2) - - -call_to_api("init") -cap = cv2.VideoCapture(0) -last_wave_time: float = 0 -WAVE_COOLDOWN: float = 3 - -try: - while True: - success, image = cap.read() - if not success: - continue - - results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) - - current_time = time.time() - if ( - results.multi_hand_landmarks - and current_time - last_wave_time > WAVE_COOLDOWN - ): - wave_motion() - last_wave_time = current_time - - cv2.imshow("Hand Detection", image) - cv2.waitKey(1) - -except KeyboardInterrupt: - print("\nExiting gracefully...") -finally: - cap.release() - cv2.destroyAllWindows() - hands.close() -``` - - - -## Hand tracking - -This is a simple implementation of hand tracking using the MediaPipe library. The robot moves based on the hand position and closure. 
- - - -# - - - -```python hand_tracking.py -import sys -import cv2 -import time -import signal -import requests -import numpy as np -import mediapipe as mp # type: ignore - -# Configurations -PI_IP: str = "127.0.0.1" -PI_PORT: int = 8080 - -# Initialize MediaPipe Hand tracking -mp_hands = mp.solutions.hands -hands = mp_hands.Hands( - static_image_mode=False, - max_num_hands=1, - min_detection_confidence=0.7, - min_tracking_confidence=0.7, -) -mp_draw = mp.solutions.drawing_utils - - -# Handle Ctrl+C to exit the program gracefully -def signal_handler(sig, frame): - print("\nExiting gracefully...") - cap.release() - cv2.destroyAllWindows() - hands.close() - sys.exit(0) - - -signal.signal(signal.SIGINT, signal_handler) - - -# Function to call the API -def call_to_api(endpoint: str, data: dict = {}): - response = requests.post(f"http://{PI_IP}:{PI_PORT}/move/{endpoint}", json=data) - return response.json() - - -def calculate_hand_closure(hand_landmarks): - """ - Calculate if the hand is closed based on thumb and index finger distance - Returns a value between 0 (open) and 1 (closed) - """ - thumb_tip = hand_landmarks.landmark[4] - index_tip = hand_landmarks.landmark[8] - - distance = np.sqrt( - (thumb_tip.x - index_tip.x) ** 2 - + (thumb_tip.y - index_tip.y) ** 2 - + (thumb_tip.z - index_tip.z) ** 2 - ) - - # Normalize distance (these values might need adjustment based on the hand size) - normalized = np.clip(1.0 - (distance * 5), 0, 1) - return normalized - - -# 1 - Initialize the robot -call_to_api("init") -print("Initializing robot") -time.sleep(2) - -# Initialize webcam -cap = cv2.VideoCapture(0) - -# Get camera frame dimensions -frame_width: float = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) -frame_height: float = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - -# Define the workspace boundaries (in meters) -WORKSPACE_Y = 0.4 -WORKSPACE_Z = 0.2 - - -def map_to_robot_coordinates(hand_x, hand_y): - """ - Map normalized hand coordinates to robot workspace coordinates - We match the hand x coordinate to the robot y coordinate - And the hand y coordinate to the robot z coordinate - """ - robot_y = ((0.5 - hand_x) * 2) * (WORKSPACE_Y / 2) * 100 - robot_z = ((0.5 - hand_y) * 2) * (WORKSPACE_Z / 2) * 100 - return robot_y, robot_z - - -# Previous position for smoothing, this helps make the robot movements less jerky -prev_pos = {"y": 0, "z": 0} -smoothing_factor = 0.5 - -try: - while cap.isOpened(): - success, image = cap.read() - if not success: - print("Failed to capture frame") - continue - - image = cv2.flip(image, 1) # The front camera is inverted - rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) # Convert to RGB image - results = hands.process(rgb_image) - - if results.multi_hand_landmarks: - for hand_landmarks in results.multi_hand_landmarks: - mp_draw.draw_landmarks(image, hand_landmarks, mp_hands.HAND_CONNECTIONS) - - palm = hand_landmarks.landmark[0] - hand_closed = calculate_hand_closure(hand_landmarks) - - robot_y, robot_z = map_to_robot_coordinates(palm.x, palm.y) - - robot_y = prev_pos["y"] * smoothing_factor + robot_y * ( - 1 - smoothing_factor - ) - robot_z = prev_pos["z"] * smoothing_factor + robot_z * ( - 1 - smoothing_factor - ) - - prev_pos = {"y": robot_y, "z": robot_z} - - call_to_api( - "absolute", - { - "x": -5, - "y": robot_y, - "z": robot_z, - "rx": 0, - "ry": 0, - "rz": 0, - "open": 1 - hand_closed, - }, - ) - - cv2.putText( - image, - f"Position: (y:{robot_y:.3f}, z:{robot_z:.3f})", - (10, 30), - cv2.FONT_HERSHEY_SIMPLEX, - 1, - (0, 255, 0), - 2, - ) - cv2.putText( - image, - 
f"Grip: {'Closed' if hand_closed > 0.5 else 'Open'}", - (10, 70), - cv2.FONT_HERSHEY_SIMPLEX, - 1, - (0, 255, 0), - 2, - ) - - cv2.imshow("Hand Tracking", image) - cv2.waitKey(1) - -except KeyboardInterrupt: - print("\nExiting gracefully...") -finally: - cap.release() - cv2.destroyAllWindows() - hands.close() -``` - - - -## Rock paper scissors - -Create fun interactions with your robot with this simple implementation of a Rock Paper Scissors game. - - - -# - - - -```python rock_paper_scissors.py -import cv2 -import time -import random -import requests -import numpy as np -import mediapipe as mp # type: ignore - -# Robot API Configuration -PI_IP = "127.0.0.1" -PI_PORT = 8080 - - -class RockPaperScissorsGame: - def __init__(self): - self.mp_hands = mp.solutions.hands - self.hands = self.mp_hands.Hands( - static_image_mode=False, - max_num_hands=1, - min_detection_confidence=0.7, - min_tracking_confidence=0.7, - ) - self.cap = cv2.VideoCapture(0) - self.gestures = { - "rock": self.make_rock_gesture, - "paper": self.make_paper_gesture, - "scissors": self.make_scissors_gesture, - } - - def call_to_api(self, endpoint: str, data: dict = {}): - response = requests.post(f"http://{PI_IP}:{PI_PORT}/move/{endpoint}", json=data) - return response.json() - - def detect_gesture(self, hand_landmarks): - # Get relevant finger landmarks - thumb_tip = hand_landmarks.landmark[4] - index_tip = hand_landmarks.landmark[8] - middle_tip = hand_landmarks.landmark[12] - ring_tip = hand_landmarks.landmark[16] - pinky_tip = hand_landmarks.landmark[20] - - # Get wrist position for reference - wrist = hand_landmarks.landmark[0] - - # Calculate distances from wrist - fingers_extended = [] - for tip in [thumb_tip, index_tip, middle_tip, ring_tip, pinky_tip]: - distance = np.sqrt((tip.x - wrist.x) ** 2 + (tip.y - wrist.y) ** 2) - fingers_extended.append(distance > 0.2) # Threshold for extended fingers - - # Determine gesture - if not any(fingers_extended[1:]): # All fingers closed - return "rock" - elif all(fingers_extended): # All fingers open - return "paper" - elif ( - fingers_extended[1] - and fingers_extended[2] - and not fingers_extended[3] - and not fingers_extended[4] - ): # Only index and middle extended - return "scissors" - return None - - def make_rock_gesture(self): - # Move to closed fist position - self.call_to_api( - "absolute", - {"x": 0, "y": 0, "z": 5, "rx": 0, "ry": 0, "rz": 0, "open": 0}, - ) - - def make_paper_gesture(self): - # Move to open hand position - self.call_to_api( - "absolute", - {"x": 0, "y": 0, "z": 5, "rx": 0, "ry": 0, "rz": 0, "open": 1}, - ) - - def make_scissors_gesture(self): - # Move to scissors position - self.call_to_api( - "absolute", - {"x": 0, "y": 0, "z": 5, "rx": 0, "ry": -45, "rz": 0, "open": 0.5}, - ) - - def move_up_down(self, times=3): - for step in range(times + 1): - self.call_to_api( - "absolute", - {"x": 0, "y": 0, "z": 4, "rx": 0, "ry": 0, "rz": 0, "open": 0}, - ) - time.sleep(0.25) - self.call_to_api( - "absolute", - {"x": 0, "y": 0, "z": -4, "rx": 0, "ry": 0, "rz": 0, "open": 0}, - ) - time.sleep(0.25) - print(times - step) - - def determine_winner(self, player_gesture, robot_gesture): - if player_gesture == robot_gesture: - return "Tie!" - winners = {"rock": "scissors", "paper": "rock", "scissors": "paper"} - return ( - "Player wins!" - if winners[player_gesture] == robot_gesture - else "Robot wins!" 
- ) - - def play_game(self): - print("Initializing robot...") - self.call_to_api("init") - time.sleep(1) - - print("Robot performing countdown...") - self.move_up_down(times=3) - - ret, frame = self.cap.read() - if not ret: - print("Failed to capture image.") - return - - rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - results = self.hands.process(rgb_frame) - - if results.multi_hand_landmarks: - player_gesture = self.detect_gesture(results.multi_hand_landmarks[0]) - - if player_gesture: - robot_gesture = random.choice(["rock", "paper", "scissors"]) - print(f"\nPlayer chose: {player_gesture}") - print(f"Robot chose: {robot_gesture}") - - self.gestures[robot_gesture]() # Robot makes its gesture - result = self.determine_winner(player_gesture, robot_gesture) - print(result) - time.sleep(2) - else: - print("Gesture not detected. Please try again.") - else: - print("No hand detected. Please try again.") - - self.cap.release() - cv2.destroyAllWindows() - - -if __name__ == "__main__": - game = RockPaperScissorsGame() - game.play_game() -``` - - \ No newline at end of file diff --git a/mintlify/faq.mdx b/mintlify/faq.mdx deleted file mode 100644 index d73e056..0000000 --- a/mintlify/faq.mdx +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: "Frequently Asked Questions" -description: "Answers to common questions about phosphobot" ---- - -## Does phospho collect telemetry? - -Phospho uses telemetry to report anonymous usage and error information. As an open-source software, this type of information is important to improve and understand how the product is used. This is done using [PostHog](https://github.com/PostHog/posthog) and [Sentry](https://sentry.io/). - -## Why do we collect telemetry? - -Anonymous telemetry information enables us to continuously improve the product and detect recurring problems to better serve all users. We collect aggregated information about general usage and errors. We do NOT collect any information on users' data records, datasets, or metadata information. None of the data is shared with third parties. We want to be super transparent about this and you can find the exact data we collect by inspecting the code in the repo. - -## How to disable telemetry? - -You can opt-out by adding the flag `--no-telemetry` to the `uv run phosphobot run` command in the `Makefile`. diff --git a/mintlify/installation.mdx b/mintlify/installation.mdx deleted file mode 100644 index 29103cc..0000000 --- a/mintlify/installation.mdx +++ /dev/null @@ -1,264 +0,0 @@ ---- -title: "Install phosphobot" -description: "Control your robot arm in seconds with phosphobot on any computer" ---- - -import InstallCode from '/snippets/install-code.mdx'; - - - - -## Install phosphobot - -The quickest way to get started is to use the install script and a compiled version. In a terminal, run the following command: - - - - -Then, run the phosphobot server: - -```bash -phosphobot run -``` - -It can take up to 15 seconds for the server to start. - -On the same computer, open a web browser and navigate to `localhost`. You should now see the **phosphobot dashboard**. - -![phosphobot dashboard](/assets/phosphobot-dashboard.png) - - - - Discover how to control your robot arm from the dashboard and from the Meta Quest app. - - -## Using uv (pip) - -phosphobot is also available as a [Python package.](https://pypi.org/project/phosphobot/) We recommend using [uv](https://docs.astral.sh/uv/getting-started/installation/) to install it. - -1. 
Install uv by running the following command in your terminal: - - - -```bash macOS and Linux -curl -LsSf https://astral.sh/uv/install.sh | sh -``` - -```powershell Windows -powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex" -``` - - - -2. Run phosphobot with uv by running the following command in your terminal: - -```bash -uvx phosphobot@latest run -``` - -Using the `@latest` tag ensures you always run the latest version of phosphobot. - -If you want to run a specific version, [you can specify it:](https://docs.astral.sh/uv/guides/tools/#requesting-specific-versions) - -```bash -uvx phosphobot>=0.30 run -``` - -Check the [uv doc](https://docs.astral.sh/uv/guides/tools/#upgrading-tools) on how to manually handle updates. - - -### Troubleshooting - -Depending on your system, you may encouter errors related to **pybullet** compilation. If so, please refer [to this guide](https://github.com/phospho-app/phosphobot/tree/main/phosphobot#troubleshooting-pybullet-wont-build-on-windows) that list some common issues and how to fix them.. - -## MacOS (brew) - -On MacOS, use [Homebrew](https://brew.sh/) to install phosphobot: - -```bash -brew tap phospho-app/phosphobot -brew install phosphobot -``` -To **update phosphobot**, run the following command: -```bash -brew update && brew upgrade phosphobot -``` - -You can also [directly download the binaries here.](https://github.com/phospho-app/homebrew-phosphobot/releases/latest) - - -## Linux (apt) - -On Linux, we use apt to distribute phosphobot and you need additional packages. That's why you need to run the install script with sudo: - -```bash -curl -fsSL https://raw.githubusercontent.com/phospho-app/phosphobot/main/install.sh | sudo bash -``` - -To **update phosphobot**, run the following command: - -```bash -sudo apt update && sudo apt install --only-upgrade phosphobot -``` - -## Raspberry Pi (apt) - -phosphobot on Raspberry Pi has advanced features to make it easier to use (led ticking, bluetooth connection, WiFi hotspot, extra dependencies already installed). - - - -*If you already have a setup Raspberry Pi, you can skip this part* - - -Download and install the [Raspberry Pi imager](https://www.raspberrypi.com/software/) on your main computer. - -Insert an **SD card** into your main computer. - -Run the Raspberry Pi imager. - -- Select your Raspberry Pi model (the version in the dev kit is Raspberry Pi 5). -- Select the Raspberry Pi OS (64 bit)? -- Select your SD card? - -Then, click **Next** on the bottom right. - -![Raspberry Pi Imager](/assets/Rpi-imager.jpg) - -When the window pops up asking for OS customisation, select **Edit Settings**. - -![Raspberry Pi OS Customisation](/assets/Rpi-OS-customisation.jpg) - -In general settings : - -- Set hostname to `phosphobot`? -- Set username and password to `phosphobot` and `password123` (example). -- Configure wireless LAN: add your Wifi network name, password, and country. Be careful: tHiS iS cAsE sEnSiTiVe! - -![Raspberry Pi General Settings](/assets/Rpi-general-settings.jpg) - -In the Services settings, make sure SSH is enabled. - -![Raspberry Pi SSH enabled](/assets/Rpi-SSH-enabled.jpg) - -Then hit Save and hit Yes to use the OS customization settings. Write the image to the SD card. - -When writing is finished, take out the SD card from your computer. - -Insert the microSD into the Raspberry Pi when it's turned off. - - - - -**1. 
SSH into the Raspberry Pi** - -SSH (Secure Shell) allows you to access the Raspberry Pi remotely from your main computer through the terminal. - -Turn on the Raspberry Pi by plugging it to a power source. Wait for it to boot (look for the LED pattern) - -Then, open a terminal and SSH into your Raspberry Pi. If you've followed the previous part, this means running on your computer this command : - -```bash -ssh phosphobot@phosphobot.local -``` - -*If you didn't follow the previous part, the username@hostname.local and password may be different.* - -When asked about the authenticity of host phosphobot.local, type `yes` to accept. - -And when asked for password, enter `password123` . - -**2. Install the software** - -When you've successfully SSH in the Raspberry Pi, run this to install the teleoperation server : - -```bash -curl -fsSL https://raw.githubusercontent.com/phospho-app/phosphobot/main/install.sh | sudo bash -``` - -**Your Raspberry Pi should be connected to the internet.** - -- When prompted `btberrywifi install location`, **leave blank**. -- When prompted `bluetooth password`, **leave blank** -- When prompted `Enter your country code`, specify your country code (eg: US, FRโ€ฆ) - -After that, the installation script will download install dependencies. It can take a few minutes depending on your connection speed. - - -**3. Once the installation is complete, reboot your Raspberry Pi.** -```bash -sudo reboot -``` - -Every time the control module is powered on, it will check for updates and install them automatically. They will be available the next time you power it on. - - -## Windows - -### Windows (Install script) - -1. To **install** or **update** phosphobot, run the following command in PowerShell: -```powershell -powershell -ExecutionPolicy Bypass -Command "irm https://raw.githubusercontent.com/phospho-app/phosphobot/main/install.ps1 | iex" -``` - -2. To start the phosphobot server, run the following command in PowerShell. - -```powershell -phosphobot run -``` -3. When the "Windows protected your PC" warning appears, click **"More info"**, then click **"Run anyway"**. - - -The warning only appears on first run. Subsequent launches work normally. - - -### Windows (manual) - -You can also [directly download the binaries here.](https://github.com/phospho-app/homebrew-phosphobot/releases/latest). After downloading, run phosphobot this way: - -```powershell -C:\path\to\phosphobot.exe run -``` - -Change `C:\path\to\` to the path where you downloaded phosphobot. To make this easier, we recommand you to rename the file to `phosphobot.exe` and move it to your Desktop. - -Then, you can **create a shortcut** to launch `C:\Users\\Desktop\phosphobot.exe run` from anywhere. - - -### Windows (WSL) - -You can use the Windows Subsystem for Linux (WSL) to run phosphobot. - -1. Please refer to [this guide](https://learn.microsoft.com/en-us/windows/wsl/install) to install WSL on your Windows machine. -2. Use [usbipd](https://github.com/dorssel/usbipd-win) to pass the robot arms and the cameras to WSL. -3. Carry on with the [Linux installation instructions](#linux-apt). - -## Install from source - -phosphobot is open source and you can install it from source to contribute to the project or fix compatibility issues. 
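In broad strokes, this amounts to cloning the repository and launching the server with uv. This is only a rough sketch (it assumes you already have uv installed and that the server code lives in the `phosphobot/` subfolder of the repo); the repository linked below is the source of truth for the exact steps:

```bash
# Clone the repository and run the server from source with uv (rough sketch)
git clone https://github.com/phospho-app/phosphobot.git
cd phosphobot/phosphobot
uv run phosphobot run
```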
- -You can find the instructions in the [GitHub repository right here](https://github.com/phospho-app/phosphobot/tree/main/phosphobot) - -## Support - - - Join the Discord to ask questions, get help from others and get updates - diff --git a/mintlify/learn/ai-models.mdx b/mintlify/learn/ai-models.mdx deleted file mode 100644 index e68d1d1..0000000 --- a/mintlify/learn/ai-models.mdx +++ /dev/null @@ -1,339 +0,0 @@ ---- -title: "Train robotics AI models" -description: "A guide to training AI models that control robots" ---- - -import InstallCode from "/snippets/install-code.mdx"; -import GetMQApp from "/snippets/get-mq-app.mdx"; -import TeleopInstructions from "/snippets/teleop-instructions.mdx"; - -The phospho starter pack makes it easy to train robotics AI models by integrating with **LeRobot** from Hugging Face. - -In this guide, we'll show you how to train the ACT (Action Chunking Transformer) model using the phospho starter pack and LeRobot by Hugging Face. - -## What is LeRobot? - -![LeRobot logo](https://cdn-uploads.huggingface.co/production/uploads/631ce4b244503b72277fc89f/MNkMdnJqyPvOAEg20Mafg.png) - -LeRobot is a platform designed to make real-world robotics more accessible for everyone. It provides pre-trained models, datasets, and tools in PyTorch. - -It focuses on state-of-the-art approaches in **imitation learning** and **reinforcement learning**. - -With LeRobot, you get access to: - -- Pretrained models for robotics applications -- Human-collected demonstration datasets -- Simulated environments to test and refine AI models - -Useful links: - -- [LeRobot on GitHub](https://github.com/huggingface/lerobot) -- [LeRobot on Hugging Face](https://huggingface.co/lerobot) -- [AI models for robotics](https://huggingface.co/models?pipeline_tag=robotics&sort=trending) - -## Step by step guide - -In this guide, we will use the phospho starter pack to record a dataset and upload it to Hugging Face. - - - -## Prerequisites - -1. You need an assembled **SO-100** robot arm and **cameras**. Get the [phosphot starter pack here](https://robots.phospho.ai). -2. Install [the phosphobot software](/installation) - - - -3. **Connect your cameras to the computer.** Start the phosphobot server. - -```bash -phosphobot run -``` - -4. Complete the [quickstart](/so-100/quickstart) and check that you can [control your robot](/basic-usage/teleop). -5. You have the **[phosphobot teleoperation app](/examples/teleop)** is installed on your **Meta Quest 2, Pro, 3 or 3s** - - - -6. You have a **device to train your model**. We recommend using a **GPU** for faster training. - -## 1. Set up your Hugging Face token - -To sync datasets, you need a Hugging Face token with write access. Follow these steps to generate one: - -1. Log in to your Hugging Face account. You can create [one here for free](https://huggingface.co) -2. Go to **Profile** and click **Access Tokens** in the sidebar. -3. Select the **Write** option to grant write access to your account. This is necessary for creating new datasets and uploading files. Name your token and click **Create token**. - -4. **Copy the token** and **save it** in a secure place. You will need it later. - -5. Make sure the phosphobot server is running. Open a browser and access `localhost` or `phosphobot.local` if you're using the control module. Then go to the Admin Configuration. - -6. **Paste the Hugging Face token**, and **save it**. - -![Paste your huggingface token here](/assets/admin-settings-huggingface.png) - -## 2. 
Set your dataset name and parameters - -Go to the _Admin Configuration_ page of your phospshobot dashboard. You can adjust settings. The most important are: - -- **Dataset Name**: The name of the dataset you want to record. -- **Task**: A text description of the task you're about to record. For example: _"Pick up the lego brick and put it in the box"_. This helps you remember what you recorded and is used by some AI models to understand the task. -- **Camera**: The cameras you want to record. By default, all cameras are recorded. You can select the cameras to record in the Admin Configuration. -- **Video Codec**: The video codec used to record the videos. The default is `AVC1`, which is the most efficient codec. If you're having compatibility issues due to unavailable codecs (eg on Linux), switch to `mp4v` which is more compatible. - -## 3. Control the robot in the Meta Quest app - -The easiest way to record a dataset is to use the Meta Quest app. - - - - - - - -Go to the **Dataset tab** in the phosphobot dashboard to see the recorded dataset. Use the button Preview to preview them using [LeRobot Dataset Visualizer](https://huggingface.co/spaces/lerobot/visualize_dataset). - -![LeRobot dataset visualizer](/assets/lerobot_dataset_viz.png) - - - The dataset visualizer only works with `AVC1` video codec. If you used another - codec, you may see black screens in the video preview. Preview directly the - videos files in a video player by opening your recording locally: - `~/phosphobot/recordings/`. - - -## 4. Train your first model - -### Train GR00T-N1-2B, Pi0.5, ACT, BB_ACT in one click with phosphobot cloud - -To train a model, you can use the phosphobot cloud. This is the quickest way to train a model. - -1. Enter the name of your dataset on Hugging Face (example: `PLB/simple-lego-pickup-mono-2`) in the **AI Training and Control** section. -2. Select the parameters you want to change or leave the default ones. -3. Click on **Train AI Model**. Your model starts training. Training can take up to 3 hours. Follow the training using the button **View trained models**. Your model is uploaded to HuggingFace [on the phospho-app account](https://huggingface.co/phospho-app). -4. To control your robot with the trained model, go to the **Control your robot** section and enter the name of your model. - -![phosphobot training cloud](/assets/phosphobot-aitraining.png) - - - - - - Learn how to train a model with phosphobot cloud - - - - Learn about controlling your robot with GR00T-N1-2B and phosphobot cloud - - - -### Train an ACT model locally with LeRobot - - - You need a GPU with at least 16GB of memory to train the model. - - -This guide will show you how to train the ACT model locally using **LeRobot** for your SO-100 robot. - -1. Install [uv](https://docs.astral.sh/uv/), the modern Python package manager. - -```bash -# On macOS and Linux -curl -LsSf https://astral.sh/uv/install.sh | sh -``` - -2. Set up training environment. - -```bash -mkdir my_model -cd my_model -uv init -uv add phosphobot git+https://github.com/phospho-app/lerobot -git clone https://github.com/phospho-app/lerobot -``` - -3. (MacOS only) Set environment variables for torch compatibility: - -```bash -export DYLD_LIBRARY_PATH="/opt/homebrew/lib:/usr/local/lib:$DYLD_LIBRARY_PATH" -``` - -4. (Optional) Add the [Weight & Biases](https://wandb.ai) integration for training metrics tracking: - -```bash -wandb login -``` - -5. 
Run training script - Adjust parameters based on your hardware: - -```bash -uv run lerobot/lerobot/scripts/train.py \ - --dataset.repo_id=LegrandFrederic/Orange-brick-in-black-box \ # Replace with / - --policy.type=act \ # Choose from act, diffusion, tdmpc, or vqbet - --output_dir=outputs/train/phoshobot_test \ - --job_name=phosphobot_test \ - --policy.device=mps \ # Use 'cuda' for NVIDIA GPUs or 'cpu' if no GPU - --wandb.enable=true # Optional -``` - -Trained models will be saved in `lerobot/outputs/train/`. - -6. (Optional) Upload the model to Hugging Face. Login to HuggingFace CLI: - -```bash -huggingface-cli login -# Enter your write token from https://huggingface.co/settings/tokens -``` - -HuggingFace model hub is a wrapper of [Github LFS](https://docs.github.com/en/repositories/working-with-files/managing-large-files/about-git-large-file-storage). Push the model to Hugging Face: - -```bash -# From your training output directory -cd lerobot/outputs/train/phosphobot_test - -# Initialize and push to Hub (replace and ) -huggingface-cli repo-create / --type model -git lfs install -git add . -git commit -m "Add trained ACT model" -git push -``` - -## 5. Control your robot with the ACT model - -1. Launch ACT inference server (Run on GPU machine): - -```bash -# Download inference server script -curl -o server.py https://raw.githubusercontent.com/phospho-app/phosphobot/main/inference/ACT/server.py -``` - -```bash -# Start server -uv run server.py --model_id LegrandFrederic/Orange-brick-in-black-box #ย Replace with -``` - -2. Make sure the phosphobot server is running to control your robot: - -```bash -# Install it this way -curl -fsSL https://raw.githubusercontent.com/phospho-app/phosphobot/main/install.sh | bash -# Start it this way -phosphobot run -``` - -3. Create inference client script (Copy the content into `my_model/client.py`): - -```python -# /// script -# requires-python = ">=3.10" -# dependencies = [ -# "phosphobot", -# ] -# -# /// -# /// script -# requires-python = ">=3.10" -# dependencies = [ -# "phosphobot", -# ] -# -# /// - -from phosphobot.camera import AllCameras -import httpx -from phosphobot.am import ACT -import time -import numpy as np - -# Initialize hardware interfaces -PHOSPHOBOT_API_URL = "http://localhost:80" -allcameras = AllCameras() -time.sleep(1) # Camera warmup - -# Connect to ACT server -model = ACT() - -while True: - # Capture multi-camera frames (adjust camera IDs and size as needed) - images = [allcameras.get_rgb_frame(0, resize=(240, 320))] - - # Get current robot state - state = httpx.post(f"{PHOSPHOBOT_API_URL}/joints/read").json() - - # Generate actions - actions = model( - {"state": np.array(state["angles_rad"]), "images": np.array(images)} - ) - - # Execute actions at 30Hz - for action in actions: - httpx.post( - f"{PHOSPHOBOT_API_URL}/joints/write", json={"angles": action.tolist()} - ) - time.sleep(1 / 30) -``` - -4. Run the inference script: - -```bash -uv run client.py -``` - -Stop the script by pressing `Ctrl + C`. - -## What's next? - -Next, you can use the trained model to control your robot. Head to our [guide](/basic-usage/inference) to get started! 
- - - - Learn more about Robotics AI models - - - Join the Discord to ask questions, get help from others and get updates (we - ship almost daily) - - diff --git a/mintlify/learn/cameras.mdx b/mintlify/learn/cameras.mdx deleted file mode 100644 index b60157c..0000000 --- a/mintlify/learn/cameras.mdx +++ /dev/null @@ -1,109 +0,0 @@ ---- -title: "Cameras" -description: "What are the different cameras used for robotics ?" ---- - -## Cameras in Robotics - -Cameras are the eyes of your robot. - -Robots may rely on one or more camera to make decisions. The images, along with additional information such as joint positions, are fed into the model during both training and inference. - -There exist different types of cameras and phosphobot supports most of them. - -### How to connect your camera to phosphobot? - -phosphobot uses the powerful [OpenCV](https://opencv.org/) library to detect cameras automatically. This open source library supports most of the generic cameras available. phosphobot ships with its own binary of OpenCV, so you don't need to install it separately. - -Placement of cameras matter. In robotics datasets, there are two main types of camera setups: - -1. **Context cameras**: These cameras are placed to capture the environment and objects around the robot. These context cameras can be placed on the robot (e.g. on the head, on the body) or in the environment (e.g. on the walls, on the ceiling). - -2. **Wrist cameras**: These cameras are placed on the wrists (hands) of the robot. They help with fine-grained manipulation tasks, so that the robot can see if it's holding an object correctly. The [phosphobot starter pack](https://robots.phospho.ai) comes with two wrist cameras, one for each arm. - - -![phospho starter pack wrist camera](/assets/wrist-camera.jpg) - -Adding more cameras usually helps to improve AI accuracy. However, it requires more compute at inference time (slower models) and makes the real-life setup more cumbersome. Usually, one context camera and two wrist cameras are a good trade-off. - -### What are stereo cameras? - -Stereo Cameras are made of two lenses that capture two images of the same scene from slightly different angles. -![stereo_cam](/assets/stereo_cam.png) - -The shift between the two images is used to calculate the depth of the scene. The greater the shift, the closer the object is to the camera. - -![Example of a stereo cam depth reconstructions](/assets/stereo_cam_example.jpg) - - -Learn how to compute a depth map from stereo images - - -In deep learning models, however, you usually feed directly the two images to the model. The model learns to extract the depth information by itself. - - -### What are depth cameras? (Realsense Cameras) - -Depth cameras are a type of camera that can directly return a depth map in addition to the color image. A depth map is a 2D image where each pixel represents the distance between the camera and the object in the scene. - -This is an example of a depth image: - -![The depth image of an Intel Realsense camera](/assets/depth_image.png) - -Intel RealSense cameras are a popular choice for depth cameras. They are more complex than standard cameras. They have multiple sensors (infrared, multiple color sensors, etc.) and a **processor** used to combine them to compute the depth map. - -This means they are pricier than standard cameras and tend to be more difficult to set up. 
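If you want to poke at the depth stream yourself, the snippet below is a minimal sketch using `pyrealsense2`, the Python bindings of the Intel RealSense SDK. It is independent of phosphobot, assumes a RealSense camera is plugged in, and the resolution and frame rate are arbitrary example values:

```python
import pyrealsense2 as rs

# Stream depth frames from the first connected RealSense camera
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    # The depth map stores a distance for every pixel; query the image center (in meters)
    distance = depth_frame.get_distance(320, 240)
    print(f"Distance at the image center: {distance:.3f} m")
finally:
    pipeline.stop()
```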
- -![Depth camera, model Intel Realsense](/assets/depth_camera.png) - - -Learn more about Intel RealSense cameras - - -phosphobot software supports Intel RealSense cameras using the [Intel Realsense SDK 2.0](https://github.com/IntelRealSense/librealsense/tree/master). - -### What are more specific cameras used in robotics? - -More specific use cases require more specific cameras. For example: - -- **[Thermal cameras](https://en.wikipedia.org/wiki/Thermography)** are used to detect heat. They are useful to detect living beings in the dark or to detect overheating components. -- **[Night vision cameras](https://en.wikipedia.org/wiki/Night-vision_device)** are used to capture images in the dark. They are useful for surveillance or for night-time navigation. -- **[Lidar cameras](https://en.wikipedia.org/wiki/Lidar)** are used to capture 3D point clouds of the environment. They are useful for autonomous vehicles or for mapping tasks. -- **[360 cameras](https://en.wikipedia.org/wiki/Omnidirectional_(360-degree)_camera)** are used to capture a full 360ยฐ view of the environment. They are useful for navigation tasks or for telepresence robots. -- **[Multi-spectral cameras](https://en.wikipedia.org/wiki/Multispectral_imaging)** are used to capture images in multiple wavelengths. They are useful for agriculture or for medical imaging. - -The eyes you give to your robot will depend on the tasks you want it to perform. - -## Dataset Recording - -When [recording a dataset](/basic-usage/dataset-recording) with phosphobot, the images are saved in a mp4 video file using OpenCV. The number of FPS (frames per second), the mp4 codec as well as the video resolution can be configured for recording in the Admin Configuration. - -By default, all available cameras are recorded. But you can disable some of them in the Admin Configuration. This is helpful if you don't want to record your laptop camera or if Apple's iPhone camera records inside your pocket. - -![Recording parameters](/assets/recording_parameters.png) - -Anybody can contribute to add new types of cameras to the phospho starter packs by creating a new camera class inheriting from `BaseCamera`. - - -Join the community and add support for more cameras for the phospho starter packs. - - -## Troubleshooting Cameras - -Every camera vendor has its own SDK and drivers. This can lead to compatibility issues. Here are some tips: - -1. On MacOS, you may have the error `failed to set power state` when connecting a Realsense camer. [This is a known issue](https://github.com/phospho-app/phosphobot/issues/171). The solution is to run `phosphobot` with `sudo` to avoid permission issues. - -``` -sudo phosphobot run -``` - -2. On Nvidia Jetpack 6 (Nvidia jetson), [pyrealsense2 doesn't work out of the box.](https://github.com/phospho-app/phosphobot/issues/206). You need to recompile the pyrealsense2 library from source. - -3. Use **virtual cameras** to avoid compatibility issues. Virtual cameras let you record your computer screen or a specific window. This is helpful as a workaround when your camera is not detected by phosphobot. - - -Check OBS Studio guide to create virtual cameras - - -4. Ask for help on the [phospho Discord](https://docs.phospho.ai/welcome), along with your camera model, your operating system (MacOS, Linux, Windows...) and what you have tried so far. We'll get this working together! 
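When you reach out, it also helps to include the output of a quick OpenCV probe, since phosphobot relies on OpenCV to detect cameras. This is a minimal sketch; probing the first five indices is an arbitrary choice, so increase the range if you have many devices:

```python
import cv2

# Report which camera indices OpenCV can open on this machine
for index in range(5):
    cap = cv2.VideoCapture(index)
    if cap.isOpened():
        ok, _frame = cap.read()
        print(f"Camera {index}: {'frame captured' if ok else 'opened, but no frame'}")
    else:
        print(f"Camera {index}: not available")
    cap.release()
```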
\ No newline at end of file diff --git a/mintlify/learn/gravity-compensation.mdx deleted file mode 100644 index ce43624..0000000 --- a/mintlify/learn/gravity-compensation.mdx +++ /dev/null @@ -1,176 +0,0 @@ ---- -title: "Gravity Compensation" -description: "Understanding Gravity Compensation in Robotics" ---- - - -## Introduction - -Gravity compensation is a fundamental technique in robotics that **allows a robot arm to be freely moved by hand** while maintaining its position when released. This page will explain how you can create a simple gravity compensation algorithm for the SO-100 robot arm. - -You can enable gravity compensation in phosphobot by going to **Leader arm control** and clicking on the **Enable gravity compensation** button. - - - - -### How Gravity Compensation Works - -At its core, **gravity compensation counteracts the effect of gravity on a robot's joints**. Without compensation, a robot arm would fall due to gravity when the motors are not actively holding position. _With proper compensation, you can move the robot by hand, and it will stay in place when released._ - - -Gravity compensation enables intuitive physical interaction with robots without requiring motor power to maintain position. - - -### The Physics Behind It - -The implementation uses the principle of inverse dynamics to calculate the torques needed to counteract gravity. Let's break down the key components: - - - -#### Inverse Dynamics - -Inverse dynamics calculates the joint torques ($$\tau$$) required to achieve a desired motion, given the current positions, velocities, and accelerations of the joints. In our case, we use it to find the gravity torques ($$\tau_g$$). - -The equation for **inverse dynamics** can be expressed as: -$$ -\tau = M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) -$$ - -Where: - -- **$$M(q)$$** is the mass matrix - -- **$$C(q,\dot{q})$$** represents Coriolis and centrifugal forces - -- **$$G(q)$$** is the gravity vector - -- **$$q, \dot{q}, \ddot{q}$$** are joint positions, velocities, and accelerations respectively - - -We want to solve for a specific case where the robot is at rest. -When we set velocities and accelerations to zero, we get just the gravity term: -$$ -\tau_g = G(q) -$$ - - - -#### Virtual Displacement Method - -Our robot is an SO-100 with STS3215 motors, which don't natively support torque control. Instead, they use position control. - -To achieve **gravity compensation**, we need to simulate the effect of torque control using position control. - -Instead of directly applying the calculated torques (which would require torque control), we create small virtual displacements in the direction that would counteract gravity. - -For each joint, we calculate a **desired position** that's slightly offset from the current position: - -$$ -\theta_{des} = \theta_{current} + \alpha \cdot \tau_g -$$ - -Where: -- **$$\theta_{des}$$** is the desired joint position - -- **$$\theta_{current}$$** is the current joint position - -- **$$\alpha$$** is a scaling factor that determines how much the joint should move - -- **$$\tau_g$$** is the gravity torque for that joint - - - - -## Implementation Details - -Let's examine key aspects needed to implement gravity compensation for the SO-100 robot arm. - -### PID Gains Adjustment - -To make the robot arm more compliant during gravity compensation, we adjust the PID gains of the motors. 
The default gains are optimized for position control, but we need **different gains** for gravity compensation. - - -These will depend on the motors you have, 6V or 12V. -Play around with the values to find the best settings for your robot. - - -These are lower than the default gains, making the robot **more responsive to external forces** while still maintaining enough stiffness to hold position against gravity. - -### The Control Loop - -The main gravity compensation loop needs to run at a high frequency to provide smooth motion. You should aim for at least 50 Hz to 100 Hz. - -Here's a simplified version of the loop: - -1. Read **current** joint positions -2. **Update** the robot state in the physics simulator (Mujoco, Genesis, PyBullet, etc.) -3. **Calculate gravity torques** using inverse dynamics -4. Compute desired positions with the **virtual displacement formula** -5. Send the new positions to the motors -6. Repeat - - - - -**Physics Simulators:** -- Mujoco -- Genesis -- PyBullet - - - -Learn more about PyBullet for robot simulation - - - -### The Alpha Parameter - -The alpha parameter is crucial for tuning the gravity compensation: - -This array controls _how much each joint responds to the calculated gravity torques_: - -- **Higher** values make the joint more responsive but potentially less stable - -- **Lower** values make the joint more stable but potentially less responsive - -- **Zero** values mean **no compensation** for that joint - -The values are tuned for each joint based on its mass properties and mechanical characteristics. - -### Mathematical Analysis - - - -The virtual displacement method can be understood as a **form of impedance control**. In traditional impedance control, the relationship between force and position is: - -$$ -F = K(x_{des} - x) + B\dot{x} -$$ - -Where: -- **$$x_{des}$$** is the desired position -- **$$x$$** is the current position -- **$$K$$** is stiffness -- **$$B$$** is damping - -Our approach inverts this relationship: -$$ -x_{des} = x + K^{-1}F -$$ - -In our case: -- **$$F$$** is the **gravity force** -- **$$K^{-1}$$** is represented by the $$\alpha$$ parameter. - - - - -Learn more about Inverse Dynamics in robotics - - -# Enjoy, and Happy Coding! \ No newline at end of file diff --git a/mintlify/learn/improve-robotics-ai-model.mdx deleted file mode 100644 index 67159cd..0000000 --- a/mintlify/learn/improve-robotics-ai-model.mdx +++ /dev/null @@ -1,119 +0,0 @@ ---- -title: "How to train a good AI model" -description: "Best practices for recording a dataset for imitation learning in robotics" ---- - -import GetMQApp from '/snippets/get-mq-app.mdx'; - - -It can be frustrating when your AI robotics model doesn't perform as expected after hours of training. Imitation learning models are powerful mimics, but **they don't understand the intent** behind an action; **they only learn to replicate the patterns they see.** This means that if a model is failing, the root cause often isn't a bug in the model itself, but rather an issue in the data it was trained on or the way it was trained. - -The principle of **"garbage in, garbage out"** is especially true here. A model trained on ambiguous, inconsistent, or noisy data will produce ambiguous, inconsistent, or noisy behavior. This guide will walk you through the most critical areas to focus on, starting with the foundation of any good model: the dataset. 
- -We'll share with you the tips and tricks we've collected during our experiments and research to give you the best chance of success. - -## Improving Your Dataset Collection - -The quality and structure of your demonstration data will have the single biggest impact on your model's performance. Think of data collection as teaching by exampleโ€”the clearer your examples, the better your student will learn. - - - -### Control the Environment - -A consistent and controlled environment is essential for collecting reliable data. Your goal is to eliminate random variables that could confuse the model. Keep the robot's operating area free of unnecessary changes, like people walking by or other machinery moving in the background. Most importantly, ensure the lighting is even and stable across all recordings. Shadows and glare can obscure objects or change their appearance, so use diffused lamps or ring lights to maintain uniformity and avoid relying on natural light, which varies throughout the day. - -### Optimize Your Camera Setup - -Cameras are the eyes of your robot, and their configuration directly impacts what the model can "see" and learn. For best results, try to match the camera arrangement used to pre-train your foundation model. For instance, a model like [pi-zero by Physical Intelligence](https://physical-intelligence.github.io/pi-zero/) was trained with wrist cameras on each arm and a first-person view (FPV) camera. Positioning these cameras to clearly capture the robot's gripper, the target object, and the overall workspace is crucial. Before recording, ask yourself: โ€œCould I control the robot effectively using only these camera views?โ€ If the answer is no, your model will struggle too. - -### Demonstrate Clear Actions - -How the robot approaches and interacts with objects forms the core of the learned task. Plan the robot's movements so the target object is visible to the camerasโ€”especially the wrist camerasโ€”for as long as possible. Avoid having the gripper block the view of the object during the final approach. Instead, angle the arm to keep both the gripper and the target in sight. This clarity helps the model build a strong connection between its movements and the outcome. Strive for a consistent and repeatable strategy for each task, as this helps the model learn a reliable pattern of behavior. - -### Build a Diverse and Balanced Dataset - -A model that only sees one way to do a task will be brittle. A diverse dataset teaches the model to generalize across different but related scenarios. Introduce intentional variations by changing the starting position of the target object (e.g., left, right, center) or by using objects of different shapes, sizes, and colors. This defines the "learning space" where your model can operate successfully. - -However, it's important to balance diversity with consistency. Avoid recording "outlier" episodes that are radically different from the rest, as they can mislead the model and teach it incorrect or unsafe behaviors. For example, if you're traning a model to pick up a cup, don't include episodes where the robot **fails** to pick it up, or episodes where it **pushes** the cup instead of grasping it. These outliers can confuse the model and lead to poor performance. - -### Collect the Right Amount of Data - -While quality is key, quantity also matters. A good starting point for a single task is **40-50 high-quality episodes**. An "episode" is one complete execution of the task, from start to finish. 
- -For more complex tasks or when fine-tuning large models, **you may need more.** - -For example, when fine-tuning a model like **GR00T N1.5**, a common recommendation is to record longer episodes (30-40 seconds each) for a total of **20 to 30 minutes of recorded data**. You can see a great [reference dataset for a table cleanup task](https://huggingface.co/spaces/lerobot/visualize_dataset?path=%2Fyouliangtan%2Fso100-table-cleanup%2Fepisode_0) to understand the quality and structure to aim for. - -Recording data can get boring for humans. Try to make it fun using **VR control**, making breaks, and rewarding yourself for reaching milestones. Pick an exciting demo that you find meaningful and a demo that you'll enjoy sharing on social media. This will help you stay motivated and engaged throughout the process. - - - -## Final Sanity Checks Before Scaling - -Robotics datasets are time-consuming to create and difficult to edit. Before you invest heavily in collecting hundreds of episodes, it's wise to perform a few checks to ensure your time is well spent. - -First, record just a handful of episodes and use a tool like the [LeRobot Visualize Dataset space](https://huggingface.co/spaces/lerobot/visualize_dataset) to confirm the data was saved correctly and loads without errors. Then, run a full, small-scale cycle: collect a small dataset (e.g., 10 episodes), train a model for a few epochs, and test its ability to perform the task. Once a model is trained, a great test is to see if it can at least replay one of the training episodes perfectly. If this mini-pipeline works, you can scale up your data collection with confidence. - - -## Beyond the Dataset: Hyperparameter Tuning - -If your dataset is solid but the model still struggles, the issue may lie in the training configuration. Hyperparameters are the settings that control the learning process itself. While default values are often a good start, tuning them can lead to significant performance gains. - -Each model have different hyperparameters, but the idea is always the same: tinker with the settings to find the best configuration for your specific task. Here are some common hyperparameters to consider: - -### Learning Rate - -The learning rate determines how much the model adjusts its internal parameters after each batch of data. Think of it as the size of the steps it takes towards a solution. If the learning rate is too high, the model might "overshoot" the optimal solution and become unstable. If it's too low, training can be incredibly slow, or the model might get stuck in a suboptimal state. A common strategy is to start with a default value (e.g., 1e-4) and adjust it by factors of 10 (e.g., 1e-3 or 1e-5) to see how it affects performance. - -### Number of Epochs or Steps - -An epoch is one full pass through the entire training dataset. The number of epochs determines how many times the model gets to see the data. Too few epochs can lead to *underfitting*, where the model hasn't learned the patterns in the data. Too many epochs can cause *overfitting*, where the model memorizes the training data perfectly but fails to generalize to new, unseen situations. - -**Start training with a small number of epochs (eg: 1 or 2),** and then progressively scale up. - -To train your models for longer, consider using [phospho pro](https://app.phospho.ai) to unlock longer training times. - - -# What's next? - -AI robotics is the most complex and exciting field in robotics research. 
Keep in mind that many of the demos you see online are usually carefully staged, edited, and cherry-picked to show the best results. Sometimes, they are even pre-recorded. They are also the result of countless hours of work, trial and error, and iteration. So don't be discouraged if your first attempts don't go as planned! Keep improving and sharing your progress with the community. - - - - - Record datasets - - - Train your first AI model - - - Join the Discord to ask questions, get help from others, and get updates. - - - - - - - diff --git a/mintlify/learn/kinematics.mdx b/mintlify/learn/kinematics.mdx deleted file mode 100644 index 12b38eb..0000000 --- a/mintlify/learn/kinematics.mdx +++ /dev/null @@ -1,248 +0,0 @@ ---- -title: "Kinematics" -description: "How to move a robot arm to a specific position." ---- - -A robot consists of **actuators**, motors that move to a specific position when given a command. - -But how do we determine the exact commands needed to move the robot to a specific position in a 3D space? - -## Forward kinematics - -_Forward kinematics is the process of calculating the position and orientation of the end effector based on the given joint angles of the robot_. - -It can be represented as a function f that takes the joint angles q as input and returns the position of the end effector x,y, and z as well as its orientation, $\phi , \theta$ and $\psi$. - -$$ -\mathbf{x} = \mathbf{f}(\mathbf{q}) = -\begin{bmatrix} -x \\ -y \\ -z \\ -\phi \\ -\theta \\ -\psi -\end{bmatrix} -= -\begin{bmatrix} -f_1(q_1, q_2, \dots, q_n) \\ -f_2(q_1, q_2, \dots, q_n) \\ -f_3(q_1, q_2, \dots, q_n) \\ -f_4(q_1, q_2, \dots, q_n) \\ -f_5(q_1, q_2, \dots, q_n) \\ -f_6(q_1, q_2, \dots, q_n) -\end{bmatrix} -$$ - -For a simple one-joint robot, forward kinematics is straightforward and can be solved using trigonometry - -For robots with multiple joints, we can use the [Denavit-Hartenberg convention](https://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg_parameters), a systematic way of representing link transformations. This allows us to determine the end effectorโ€™s position and orientation in a 3D coordinate system. -## Inverse kinematics - -_Inverse kinematics is the process of determining the joint angles required to place the end effector at a desired position and orientation._ - -It can be thought of as the **inverse of forward kinematics**: - -$$ -\mathbf{q} = \mathbf{f}^{-1}(\mathbf{x}) -$$ - -Inverse kinematics is essential for controlling the robot arm. It allows us to move the robot arm to a specific position and orientation in 3D space. - -It is a more complex process, as it involves solving a system of equations to determine the joint angles of the robot. -- There may be **multiple solutions** (different joint configurations that reach the same end effector position). -- Some positions may be **unreachable** due to mechanical constraints. -- It often requires solving **nonlinear equations**, which may not have a direct solution. - -To address these challenges, inverse kinematics is commonly solved as an **optimization problem**, minimizing the difference between the desired and actual end effector positions. - - -Learn more about Inverse Kinematics - - -## Moving a robot - -Since we control the actuators themselves, we can move the robot arm to a specific position by sending the joint angles to the actuators. This involves: - -1. **Reading the current joint angles** from the motors. - -2. 
**Solving the inverse kinematics problem** iteratively to converge towards the desired joint angles. - -3. **Sending these joint angles to the actuators** to move the robot to the desired position. - -The [phosphobot SDK](https://github.com/phospho-app/phosphobot) gives you **two different movement commands**: -- [move/absolute](/control/move-absolute-position): Move the robot arm to a specific position and orientation in 3D space. -- [move/relative](/control/move-relative-position): Move the robot arm by a specific distance and rotation in 3D space. - -## Simulation - -Simulations allow you to test movement models in **virtual environments** before applying them to a physical robot. - -They **replicate real-world physics** while remaining computationally efficient. - -Simulations are particularly useful for testing **inverse kinematics**, enabling safe and rapid iteration on motion planning before deploying on actual hardware. - - - -Note that **no simulation prevents the need for real-world testing.** Real world properties like materials, sensor noise, human interaction, and more can't be fully replicated. Simulations themselves can have bugs or inaccuracies that affect results. - -Simulations are here to help you iterate faster, expand your mathematical toolkit, and reduce the risk of damaging hardware. - -## Trade-offs when choosing a simulation backend - -- Simple tools are easy to use, but don't offer the same level of realism. They are also less extensible. -- More realistic simulations need more computing power, which can be slow or expensive. They are usually more difficult to setup and can be overkill for simple tasks. -- Open-source options are flexible and community-driven, while proprietary ones might be more polished but costly and hardware-specific. - -The right choice depends on your needs and resources. - -## PyBullet - -PyBullet is an open-source physics engine released in 2015. It is maintained by the community, but its maintenance status is currently inactive, with no new versions released in the past year. - -![PyBullet](/assets/pybullet.png) - -### Requirements - -PyBullet uses URDF (Unified Robot Description Format) files to define robot structures. -It's lightweight and runs on most modern computers without specialized hardware. - -### Features - -PyBullet offers a balance between realism and computational efficiency, suitable for real-time simulations and educational purposes. - -Despite being old, Pybullet is easy to set up, with extensive documentation and community support. - - - -**Pros:** Easy to use, good community support, open source. - -**Cons:** Lack advanced features, inactive maintenance. - - -Learn more about PyBullet - - - -## Gazebo - -Gazebo is a well-established open-source robotics simulator that has been widely used in academia and industry since its initial release in 2004. It is maintained by Open Robotics and has a strong community of contributors. - -![Gazebo](/assets/gazebo.gif) - -### Requirements - -Gazebo uses SDF (Simulation Description Format) files to define robot models and environments. It supports a wide range of operating systems, including Linux, macOS, and Windows, and can run on standard hardware without the need for specialized equipment. - -### Features - -Gazebo provides a robust simulation environment with realistic physics, sensor models, and a variety of plugins for customization. It is particularly well-suited for testing robotic algorithms in complex environments. 
- -Gazebo is known for its ease of integration with [ROS](https://www.ros.org), making it a popular choice for robotics research and development. - - - -**Pros:** Strong community support, extensive documentation, ROS integration, open source. - -**Cons:** Can be resource-intensive, may require additional setup for advanced features. - - -Learn more about Gazebo - - - -## MuJoCo - -MuJoCo (Multi-Joint dynamics with Contact) is a powerful physics engine designed for simulating complex robotic systems. Initially released in 2012, it is now maintained by Google DeepMind and has become a staple in robotics research and development. - -![MuJoCo](/assets/mujoco.png) - -### Requirements - -Required Files: MuJoCo uses XML files, specifically the MJCF (MuJoCo XML) format, to define robot models and environments. This format allows for detailed specification of the physical properties and dynamics of the simulated entities. - -Hardware Requirements: MuJoCo is optimized for performance and can run efficiently on standard hardware. However, it really shines by running on GPUs and TPUs, making it suitable for data-intensive tasks like reinforcement learning. - -### Features - -MuJoCo is best for contact dynamics and complex interactions. - -Mujoco is straightforward to set up, with comprehensive docs. The recent open-source release has made it more accessible to researchers and developers. - - - -**Pros:** High fidelity, performance, flexibility, community, open source. - -**Cons:** complexity, hardware requirements, specialized use. - - - -Learn more about MuJoCo - - - - - -## NVIDIA Isaac - -Release and Maintenance: NVIDIA Isaac was released in 2018 and is maintained by NVIDIA, with the latest update in January 2025. - -![Genesis](/assets/nvidia_isaac.gif) - -### Requirements - -Required Files: It uses USD (Universal Scene Description) files for defining environments and robots. - -Hardware Requirements: NVIDIA Isaac is optimized for NVIDIA GPUs, leveraging their power for high-performance simulations. - -### Features - -Nvidia Isaac provides a highly realistic simulation environment. It's great for autonomous systems. - -Setting up NVIDIA Isaac can be complex, requiring specific hardware and software configurations. - - - -**Pros:** High realism, optimized for NVIDIA hardware, strong support for AI applications. - -**Cons:** Requires NVIDIA hardware, complex setup, closed source. - - - -Learn more about Nvidia Isaac Sim - - - - -## Genesis - -Genesis is a physics simulation platform designed for general-purpose robotics, embodied AI, and physical AI applications. Developed by a consortium of researchers, it was released in December 2024 and is maintained by the Genesis-Embodied-AI community. - -![Genesis](/assets/genesis.webp) - -### Requirements - -Genesis supports MJCF (.xml), URDF, and 3D model formats like .obj and .stl. - -The framework leverages GPU-accelerated parallel computation, making it highly efficient on modern GPUs. The current version is mostly compatible with Linux and Nvidia GPUs (CUDA), while support for other platforms is under development. - -### Features - -Genesis excels in simulating a wide range of physical phenomena, including rigid body dynamics, fluid mechanics, and soft robotics. It aims to let you train AI models and test robotic systems in complex environments. - -Genesis is designed to be user-friendly with a Pythonic interface, making it accessible to both beginners and experienced developers. - - - -**Pros:** high speed, wide range of physicals models, open source. 
- -**Cons:** requires powerful hardware, most features still under development. - - -Learn more about Genesis - - - - \ No newline at end of file diff --git a/mintlify/learn/lerobot-dataset.mdx b/mintlify/learn/lerobot-dataset.mdx deleted file mode 100644 index 55a50e0..0000000 --- a/mintlify/learn/lerobot-dataset.mdx +++ /dev/null @@ -1,267 +0,0 @@ ---- -title: "LeRobot Dataset Format" -description: "Learn about the LeRobot dataset format: its structure, versions, how to use it, and common tips." ---- - -# What is the LeRobot Dataset Format? - -The LeRobot Dataset format is a standard way to organize and store robot learning data, making it easy to use with tools like PyTorch and Hugging Face. You can load a dataset from the Hugging Face Hub or a local folder with a simple command like `dataset = LeRobotDataset("lerobot/aloha_static_coffee")`. Once loaded, you can access individual data frames (like `dataset[0]`) which provide observations and actions as PyTorch tensors, ready for your model. - -A special feature of `LeRobotDataset` is `delta_timestamps`. Instead of just getting one frame, you can get multiple frames based on their time relationship to the frame you asked for. For example, `delta_timestamps = {"observation.image": [-1, -0.5, -0.2, 0]}` will give you the current frame and three previous frames (from 1 second, 0.5 seconds, and 0.2 seconds before). This is great for giving your model a sense of history. You can see more in the [1_load_lerobot_dataset.py example](https://github.com/huggingface/lerobot/blob/main/examples/1_load_lerobot_dataset.py). - -The format is designed to be flexible for different types of robot data, whether from simulations or real robots, focusing on camera images and robot states, but extendable to other sensor data. - - -## How is a LeRobot Dataset Organized on Disk? - -A LeRobot Dataset is organized on disk into specific folders for data (Parquet files), videos (MP4 files), and metadata (JSON/JSONL files). Here's a typical structure for a `v2.1` dataset: - -``` -/ -โ”œโ”€โ”€ data/ -โ”‚ โ””โ”€โ”€ chunk-000/ -โ”‚ โ”œโ”€โ”€ episode_000000.parquet -โ”‚ โ”œโ”€โ”€ episode_000001.parquet -โ”‚ โ””โ”€โ”€ ... -โ”œโ”€โ”€ videos/ -โ”‚ โ””โ”€โ”€ chunk-000/ -โ”‚ โ”œโ”€โ”€ observation.images.main/ (or your_camera_key_1) -โ”‚ โ”‚ โ”œโ”€โ”€ episode_000000.mp4 -โ”‚ โ”‚ โ””โ”€โ”€ ... -โ”‚ โ”œโ”€โ”€ observation.images.secondary_0/ (or your_camera_key_2) -โ”‚ โ”‚ โ”œโ”€โ”€ episode_000000.mp4 -โ”‚ โ”‚ โ””โ”€โ”€ ... -โ”‚ โ””โ”€โ”€ ... -โ”œโ”€โ”€ meta/ -โ”‚ โ”œโ”€โ”€ info.json -โ”‚ โ”œโ”€โ”€ episodes.jsonl -โ”‚ โ”œโ”€โ”€ tasks.jsonl -โ”‚ โ”œโ”€โ”€ episodes_stats.jsonl (for v2.1) or stats.json (for v2.0) -โ”‚ โ””โ”€โ”€ README.md (often, for Hugging Face Hub) -โ””โ”€โ”€ README.md (top-level, for Hugging Face Hub) -``` - -## How to Manipulate and Edit LeRobot Datasets? - -The common operations for manipulating and editing LeRobot datasets include: - -* **Repairing:** Fixing inconsistencies in metadata files (e.g., `episodes.jsonl`, `info.json`) or re-indexing episodes if files are added/removed manually. -* **Merging:** Combining two or more LeRobot datasets into a single, larger dataset. This requires careful handling of episode indices, frame indices, task mappings, and recalculating or merging statistics. -* **Splitting:** Dividing a dataset into multiple smaller datasets (e.g., a training set and a test set). This also involves re-indexing and adjusting metadata and statistics for each new split. 
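Whichever operation you need, it is worth loading the dataset first and checking what you are working with. Below is a minimal sketch: the import path matches older `lerobot` releases and may differ in newer ones, and the camera key `observation.images.cam_high` is only an example (check `info.json -> features` for the actual keys of your dataset):

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Load a dataset from the Hugging Face Hub (cached locally on first use)
dataset = LeRobotDataset("lerobot/aloha_static_coffee")
print(f"Total frames: {len(dataset)}")

# Ask for the current camera frame plus three earlier ones (1 s, 0.5 s and 0.2 s before)
dataset_with_history = LeRobotDataset(
    "lerobot/aloha_static_coffee",
    delta_timestamps={"observation.images.cam_high": [-1, -0.5, -0.2, 0]},
)
frame = dataset_with_history[0]  # a dict of PyTorch tensors
print(frame["observation.images.cam_high"].shape)  # 4 stacked frames
```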
- -In this video, PLB demonstrates how you can use `phosphobot` to perform common dataset operations. - -
- -
- -You can also use [python scripts](https://github.com/phospho-app/phosphobot/tree/main/scripts/datasets). Make sure to use well tested scripts and to version your datasets. - -## How to Visualize a LeRobotDataset? - -You can visualize a LeRobotDataset using the HuggingFace Visualize Dataset space, which leverages `rerun.io` to display camera streams, robot states, and actions. This is a convenient way to inspect your data, check for anomalies, or simply understand the recorded behaviors. - -![Visualize Dataset space](/assets/lerobot_dataset_visualizer.png) - -## What are the columns in a LeRobot Dataset? - -The core data in a LeRobot dataset consists of Parquet files containing trajectory information (like robot states and actions) and MP4 video files for camera observations. - -1. **Parquet Files (`data/chunk-000/episode_xxxxxx.parquet`):** - * These files store the step-by-step data for each robot episode. - * When loaded, this becomes part of a Hugging Face `Dataset` object (often named `hf_dataset` in the `LeRobotDataset` object). - * **Common features you'll find inside:** - * `observation.state` (list of numbers): Robot's state, like joint angles or end-effector position. - * `action` (list of numbers): The action taken, like target joint angles. - * `timestamp` (number): Time in seconds from the start of the episode. - * `episode_index` (integer): ID for the episode. - * `frame_index` (integer): ID for the frame *within* its episode (starts at 0 for each episode). - * `index` (integer): A unique ID for the frame across the *entire* dataset. - * `next.done` (true/false, optional): True if this is the last frame of an episode. - * `task_index` (integer, optional): Links to a task in `tasks.jsonl`. - -2. **Video Files (`videos/chunk-000/camera_key/episode_xxxxxx.mp4`):** - * Camera images are stored as MP4 videos to save space. - * Each MP4 file is usually one camera's view for one full episode. - * The `hf_dataset` (when loaded) will point to these video frames using a `VideoFrame` object for each camera observation (e.g., `observation.images.cam_high`): - * `VideoFrame = {'path': 'path/to/video.mp4', 'timestamp': time_in_video_seconds}`. - * The system uses this to grab the correct image from the video. - -## What Information is Stored in the LeRobot Metadata Files? - -LeRobot metadata files, found in the `meta/` directory, store crucial information about the dataset's structure, content, statistics, and individual episodes. - -1. **`info.json`:** Contains general information about the whole dataset. - * `codebase_version` (text): "v2.0" or "v2.1". Tells you how to read other metadata, especially stats. - * `robot_type` (text): What kind of robot was used. - * `fps` (number): The intended frames-per-second of the data. - * `total_episodes` (integer): How many episodes are in the dataset. - * `total_frames` (integer): Total number of frames across all episodes. - * `total_tasks` (integer): Number of different tasks defined. - * `total_videos` (integer): Total number of video files. - * `splits` (dictionary): Info on data splits, like `{"train": "0:N"}` means episodes 0 to N-1 are for training. - * `features` (dictionary): Very important! This describes every piece of data: its type, shape, and sometimes names. 
- * Example for `observation.state`: `{"dtype": "float32", "shape": [7], "names": ["joint1", ...]}` - * Example for a camera `observation.images.main`: - ```json - "observation.images.main": { - "dtype": "video", - "shape": [224, 224, 3], // height, width, channels - "names": ["height", "width", "channel"], - "info": { // Details about the video itself - "video.fps": 10, - "video.codec": "mp4v", - // ... other video details - } - } - ``` - * `camera_keys` (list of text, implied by `features`): Names for camera data, like `observation.images.main`. - -2. **`episodes.jsonl`:** A file where each line is a JSON object describing one episode. - * `episode_index` (integer): The episode's ID. - * `tasks` (list of text): List of task descriptions (e.g., "pick up the red block") for this episode. - * `length` (integer): Number of frames in this episode. - -3. **`tasks.jsonl`:** A file where each line is a JSON object linking task IDs to descriptions. - * `task_index` (integer): The ID used in the Parquet files. - * `task` (text): The actual task description. - -4. **`episodes_stats.jsonl` (for v2.1):** Each line is a JSON object with statistics for one episode. - * `episode_index` (integer): The episode ID. - * `stats` (dictionary): Contains stats (`{'max': ..., 'min': ..., 'mean': ..., 'std': ...}`) for each feature (like `observation.state`, `action`) *within that specific episode*. - * For images, stats (mean, std) are usually per-channel. - -5. **`stats.json` (for v2.0):** A single JSON file with statistics for the entire dataset combined. - * Similar structure to the `stats` object in `episodes_stats.jsonl`, but for all data. - - -## What are the Key Concepts and Important Fields in a LeRobot Dataset? - -Key concepts in a LeRobot dataset include different types of indices (episode, frame, global), timestamps, and specific fields like `action` and `observation.state` which have precise meanings. - -* **Indices:** - * `episode_index`: Identifies an episode (e.g., 0, 1, 2...). - * `frame_index`: Identifies a frame *within* an episode (e.g., 0, 1, ... up to `length-1`). It resets for each new episode. - * `index`: A global, unique ID for a frame across the *entire dataset*. For example, if episode 0 has 100 frames (index 0-99), and episode 1 has 50 frames, episode 1's frames would have global indices 100-149. -* **Timestamps:** - * `timestamp` (in Parquet files): Time in seconds from the start of the current episode for that frame. - * `VideoFrame.timestamp` (for video features): Time in seconds *within the MP4 video file* where that specific frame is. - * `fps` (in `info.json`): The intended frame rate. Ideally, `timestamp` should be close to `frame_index / fps`. -* **`action` field:** In robot learning, the `action` recorded at frame `t` is usually the action that *caused* the observation at frame `t+1`. For instance, if actions are target joint positions, `action[t]` might be the joint positions observed at `observation.state[t+1]`. -* **`observation.state` vs. `joints_position`:** The Python code example you saw might use `joints_position` for joint angles and `state` for something else (like end-effector pose). LeRobot examples often use `observation.state` more broadly for the robot's proprioceptive data (like joint positions). Always check the dataset's `info.json -> features` to know exactly what `observation.state` means for that specific dataset. - - -## What are Common Pitfalls and Best Practices for Working with LeRobot Datasets? 
- -Common pitfalls when working with LeRobot datasets include version incompatibilities and memory issues, while best practices involve using version 2.1, understanding feature definitions, and ensuring data consistency. - -1. **Hugging Face Hub:** - * LeRobot tools often use the `main` branch on the Hub, but some datasets have their latest data on the `v2.1` branch. Make sure the training script references your correct dataset branch. - * You'll need a Hugging Face token with write access to upload or change datasets on the Hub. -2. **Local Cache:** Datasets from the Hub usually download to `~/.cache/huggingface/lerobot`. You can change this with the `root` argument when loading. Cache can lead to issue: sometimes, if you change a dataset on the Hub, your local cache might not update automatically. If this is the case, **delete the local cache folder** for that dataset to force a fresh download. -3. **Version Choice:** **Strongly prefer `v2.1`**. It uses `episodes_stats.jsonl` (per-episode stats), making it easier to manage and modify datasets (delete, merge, split, shuffle). `v2.0` (with a single `stats.json`) is harder to keep correct if you change the dataset. -4. **`delta_timestamps` and History:** This is great for temporal context but be aware that asking for a long history (many previous frames) means loading more data for each sample, which uses more memory and can be slower. -5. **Feature Naming:** Use the dot-notation like `observation.images.camera_name` or `observation.state`. This is what LeRobot expects. -6. **Data Consistency:** - * Try to keep feature shapes (like the number of elements in `observation.state` or image sizes) the same, at least within an episode, and ideally across the whole dataset. If they vary, your code will need to handle it. - * `fps` should be consistent. If it varies, `delta_timestamps` might not give you the time intervals you expect. -7. **Video Encoding:** Videos are usually MP4, and only the **avc1** codec is visible in the LeRobot dataset viewer. LeRobot uses torchvision to decode video. Details like codec are listed in `info.json`. -8. **Generating Statistics:** If you make your own dataset, make sure the stats (`stats.json` or `episodes_stats.jsonl`) are correct. They are important for normalizing data during training. The `phosphobot` code has tools for this. -9. **`episode_data_index`:** The `LeRobotDataset` calculates this automatically when loaded. It helps quickly map global frame numbers to episode-specific frames, especially with `delta_timestamps`. -10. **Memory for Videos frames:** Loading many high-resolution videos (from `delta_timestamps`) can use a lot of memory. Choose video sizes that fit your needs and hardware. If you run into "Cuda out of memory" errors, lower the resolution of the videos. -11. **Action Definition:** Know exactly what `action` means in your dataset (e.g., target joint positions, joint velocities?). This is vital for training a policy. -12. **Adding Custom Data:** You can add your own observation or action types. Just make sure they can be turned into tensors and describe them in `info.json`. - -## LeRobot Dataset Versions - -LeRobot datasets have different versions (v1, v2, v2.1), with `v2.1` being the recommended version for most use cases. The version is specified in the `info.json` file under the `codebase_version` field. - -### What are the Differences Between LeRobot v2.0 and v2.1 Dataset Versions? 
- -The main differences between LeRobot `v2.0` and `v2.1` dataset versions lie in how they store statistics and support dataset modifications, with `v2.1` being the recommended, more flexible version. - -* **`lerobot_v2.0` (Older):** - * Uses one file, `meta/stats.json`, to store statistics (like mean, min, max) for the entire dataset. - * Modifying the dataset (like deleting an episode) is not well-supported with this version because updating these global statistics is tricky. -* **`lerobot_v2.1` (Recommended):** - * Uses `meta/episodes_stats.jsonl` instead of `stats.json`. - * This file stores statistics *for each episode separately*. Each line in the file is for one episode and its stats. - * This makes it much easier to manage the dataset, like deleting, merging, or splitting episodes, because stats can be updated or recalculated more easily for the affected parts. - * The `info.json` file will clearly state `codebase_version: "v2.1"`. - * **Recommendation:** Always try to use or convert datasets to `v2.1` for the best experience and support. - -Tooling around LeRobot, like the `phosphobot` code, usually handles both versions, but `v2.1` gives you more power. - - -### What's New in the Upcoming LeRobot v3.0 Dataset Format? - -The upcoming LeRobot v3.0 dataset format introduces significant changes aimed at improving scalability, data organization, and efficiency, particularly for handling very large datasets. The primary rationale appears to be a move towards a more sharded and consolidated data structure, where episode data, videos, and metadata are grouped into larger, chunked files rather than per-episode files. This is evident from the conversion script `convert_dataset_v21_to_v30.py` (from [Pull Request #969 on GitHub](https://github.com/huggingface/lerobot/pull/969)), which details the transformation from v2.1 to v3.0. - -**Key Changes from v2.1 to v3.0:** - -1. **Consolidation of Episode Data and Videos:** - * **Old (v2.1):** Each episode had its own Parquet file (`data/chunk-000/episode_000000.parquet`) and its own video file per camera (`videos/chunk-000/CAMERA/episode_000000.mp4`). - * **New (v3.0):** Multiple episodes' data will be concatenated into larger Parquet files (e.g., `data/chunk-000/file_000.parquet`). Similarly, videos from multiple episodes for a specific camera will be concatenated into larger video files (e.g., `videos/chunk-000/CAMERA/file_000.mp4`). - * The target size for these concatenated files seems to be configurable (e.g., `DEFAULT_DATA_FILE_SIZE_IN_MB`, `DEFAULT_VIDEO_FILE_SIZE_IN_MB`). - -2. **Restructuring of Metadata Files:** - * **`episodes.jsonl` (Old v2.1):** A single JSON Lines file where each line detailed an episode (`episode_index`, `tasks`, `length`). - * **`meta/episodes/chunk-000/episodes_000.parquet` (New v3.0):** This information, along with new indexing details (pointing to the specific chunk and file for data and video, and `from/to_timestamp` for video segments), will now be stored in sharded Parquet files. The schema will include columns like `episode_index`, `video_chunk_index`, `video_file_index`, `data_chunk_index`, `data_file_index`, `tasks`, `length`, `dataset_from_index`, `dataset_to_index`, and video timestamp information. - * **`tasks.jsonl` (Old v2.1):** A single JSON Lines file mapping `task_index` to `task` description. - * **`meta/tasks/chunk-000/file_000.parquet` (New v3.0):** Task information will also be stored in sharded Parquet files (e.g., columns `task_index`, `task`). 
- * **`episodes_stats.jsonl` (Old v2.1):** Per-episode statistics in a JSON Lines file. - * **`meta/episodes_stats/chunk-000/file_000.parquet` (New v3.0):** Per-episode statistics will also move to sharded Parquet files, likely containing `episode_index` and flattened statistics (mean, std, min, max for various features). - -3. **Updates to `meta/info.json`:** - * `codebase_version` will be updated to `"v3.0"`. - * Fields like `total_chunks` and `total_videos` (which were aggregates) might be removed or rethought, as chunking is now more explicit. - * New fields like `data_files_size_in_mb` and `video_files_size_in_mb` will specify the target sizes for the concatenated files. - * `data_path` and `video_path` will reflect the new `file_xxx.parquet/mp4` naming scheme. - * FPS information will be added to features in `info["features"]` if not already present in video-specific info. - -4. **Removal of `stats.json`:** The script explicitly mentions removing the deprecated `stats.json` (which was already superseded by `episodes_stats.jsonl` in v2.1). Global aggregated stats will now be computed from the sharded per-episode stats. - -**Rationale Behind v3.0 Changes:** - -* **Scalability for Large Datasets:** The most significant driver appears to be improved handling of massive datasets (like DROID, mentioned in the PR diffs). - * Having fewer, larger files reduces filesystem overhead (e.g., inode limits) and can be more efficient for I/O operations, especially in distributed computing environments (like SLURM, also mentioned). - * Concatenating data into larger chunks makes sharding and parallel processing more manageable. -* **Efficiency:** Reading fewer, larger files can sometimes be faster than reading many small files. -* **Standardization with Parquet for Metadata:** Moving more metadata (episodes, tasks, episode_stats) into Parquet files brings consistency and allows leveraging the benefits of the Parquet format (columnar storage, compression, schema evolution) for metadata as well. -* **Hub Management:** The script includes steps for updating tags and cleaning up old file structures on the Hugging Face Hub, indicating a more robust versioning and deployment strategy. - -In essence, LeRobot v3.0 is evolving to become a more robust and scalable format, better suited for the increasingly large and complex datasets used in robotics research. While it introduces changes to the underlying file structure and metadata organization, the goal is to enhance performance and manageability without sacrificing the core ease of use provided by the `LeRobotDataset` abstraction. - - -# Whatโ€™s next? - -Next, record your own dataset and use it to train a policy! - - - - Recorde your first dataset - - - Train your first AI model - - - Join the Discord to ask questions, get help from others and get updates (we ship almost daily) - - - -For more information about LeRobot, checkout the [LeRobot Github repository](https://github.com/huggingface/lerobot) \ No newline at end of file diff --git a/mintlify/learn/overview.mdx b/mintlify/learn/overview.mdx deleted file mode 100644 index ae52758..0000000 --- a/mintlify/learn/overview.mdx +++ /dev/null @@ -1,69 +0,0 @@ ---- -title: "Core Robotics Concepts" -description: "The basics to get started in robotics." ---- - -In this article, we'll cover some of the core concepts in robotics. We will also cover some concepts of AI and machine learning that are relevant to robotics. - -## What are the components of a robot? 
- -Your robot is made up of a few key components: - -![junior](/assets/junior.jpg) - -- **Joints/Actuators**: The motors of the robot that allow the links to move relative to each other. -- **Links**: The rigid plastic parts of the robot that are connected by joints. -- **End effector**: The last part of the robot that interacts with the environment (e.g., a gripper). -- **Sensors**: The robot's eyes and ears. They allow the robot to perceive the world around it. -- **Controller**: The robot's brain. It processes the sensor data and sends commands to the actuators. - -The main challenge is coordinating all these components to perform specific tasks. - -How do you program a robot to do something? How can it learn to do something new? How can it adapt to changes in its environment? These are all questions robotics researchers are trying to answer. - -The most basic way to control a robot is to send it a sequence of commands. For example, you can tell the robot to move its arm to a specific position. This is called **kinematics**. - - -Learn more about kinematics - - -## What is a policy in robotics? - -A policy defines how the robot makes decisions and takes actions based on its environment. It is a function that maps the current state of the robot to an action. In concrete terms, it tells the robot what to do in a given situation. - -For example, a policy for a robot vacuum cleaner might be: -- If the robot detects dirt, move towards it. -- If the robot detects a wall, turn left. - - -Learn more about policies and AI in robotics - - -## Vocabulary of AI robotics - -The vocabulary of AI robotics can be a bit confusing. Here are some key terms to help you understand the concepts better. - -### Imitation learning vs. Reinforcement learning - -- **Imitation learning**: A type of training where the robot learns from examples provided by a human or another robot. This is often used to teach the robot how to perform specific tasks. -- **Reinforcement learning**: A type of training where the robot learns by trial and error. It receives rewards for good actions and penalties for bad actions, and it adjusts its policy accordingly. - -### Robotics vocabulary - -The robotics vocabulary heavily relies on the concepts of reinforcement learning. Here are some key terms: - -- **Agent**: The robot itself. It interacts with the environment and learns from its experiences. -- **State**: The current configuration of the robot. This includes the positions of the joints, the orientation of the end effector, and any sensor readings. -- **Action**: The movement or command the robot executes in response to its current state. -- **Reward**: A numerical value assigned based on how well the robot's action achieves its objective. The robot's goal is to maximize this reward over time. -- **Environment**: The world in which the robot operates. This includes the objects in the environment, the robot itself, and any other agents. -- **Policy**: The strategy the robot uses to decide which action to take in a given state. It can be deterministic (always taking the same action in the same state) or stochastic (taking different actions in the same state). [Learn more](/learn/policies) - -### Machine learning vocabulary - -As policies are often learned from data, the vocabulary of machine learning is also relevant. Here are some key terms: - -- **Model**: A mathematical representation of the policy. It takes the current state as input and outputs the action to be taken. 
-**Training**: The process of teaching the robot to improve its policy. This is done by providing it with examples of good and bad actions and adjusting its policy based on the rewards it receives. -**Inference**: The process of using a trained policy to control the robot in real-time. This is done by feeding the robot's current state into the policy and executing the resulting action. -**Dataset**: A collection of examples used to train the robot's policy. This can include images, sensor readings, natural language instructions, and actions taken by the robot. [The most common format is LeRobot v2](/learn/lerobot-dataset) \ No newline at end of file diff --git a/mintlify/learn/policies.mdx b/mintlify/learn/policies.mdx deleted file mode 100644 index 7f22134..0000000 --- a/mintlify/learn/policies.mdx +++ /dev/null @@ -1,218 +0,0 @@ ---- -title: "Policies in AI Robotics" -description: "What are the latest AI robotics models?" ---- - -Recently, AI robotics has seen a surge of interest, thanks to the rise of a new generation of policies: **Vision-Language Action Models** (VLAs). - -phosphobot makes it easy to train and deploy VLAs. You can use them to control your robot in a variety of tasks, such as picking up objects and understanding natural language instructions. - -In this guide, we'll show you the latest models in AI robotics and give you useful resources to get started with training your own policies. - -## What is a policy? - -A **policy** is the brain of your robot. It tells the robot what to do in a given situation. Mathematically, it's a function $$\pi$$ that maps the current **state** $$S$$ of the robot to an **action** $$A$$. - -$$ -\pi: S \rightarrow A -$$ - -- $$S$$ the state is usually the robot's position, the camera and sensor feeds, and the text instructions. -- $$A$$ the action depends on the robot. For example, high-level instructions ("move left", "move right"), the *6-DOF* (degrees of freedom) Cartesian position (x, y, z, rx, ry, rz), the angles of the joints... -- $$\pi$$ the policy is basically the AI model that controls the robot. It can be as simple as a **hard-coded rule** or as complex as a **deep neural network**. - -Recent breakthroughs have made it possible to leverage the **[transformer](https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture))** architecture and **internet-scale data** to train more advanced policies, which radically differ from old-school robotics and reinforcement learning. - - -The traditional way to control robots is to use **hard-coded rules**. - -For example, you could write a program that tells the robot to move left when it sees a red ball. For that, you'd look for red pixels in the camera feed, and send a command to turn motor number 1 by 90 degrees if you see a cluster of red pixels. - -This approach is the one used in **industrial robots** and **simple home robots**. It's simple and efficient, but it's not very flexible. You need to write a new program for every new task. - - - -**Reinforcement Learning (RL)** is another approach to train policies (around since the 1990s and mainstream since the 2010s). In RL, the robot learns by interacting with the environment and receiving rewards. It's like teaching a child to ride a bike by giving them feedback on their performance. - -Usually, the environment is a [simulation.](./kinematics#simulation) Today, it's successful for walking robots that need to learn how to balance themselves.
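To make the hard-coded-rules approach above concrete, here is a toy, self-contained sketch of such a policy in the $$\pi: S \rightarrow A$$ sense. The thresholds, image size, and command names are invented for illustration; a real robot would need a proper detector and a real motor command.

```python
import numpy as np

def red_ball_policy(camera_frame: np.ndarray) -> str:
    """Toy hard-coded policy: state = RGB camera frame, action = a high-level command.
    The 'detector' is just a count of reddish pixels; all numbers are illustrative."""
    red, green, blue = camera_frame[..., 0], camera_frame[..., 1], camera_frame[..., 2]
    red_pixels = (red > 150) & (green < 80) & (blue < 80)
    if red_pixels.sum() > 500:           # a cluster of red pixels was found
        return "turn_motor_1_by_90_degrees"
    return "do_nothing"

# state -> action: exactly the pi: S -> A mapping, but written by hand
frame = np.zeros((240, 320, 3), dtype=np.uint8)   # fake camera image
print(red_ball_policy(frame))
```

Every new task requires writing a new rule like this by hand, which is exactly the limitation the learned policies below address.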
- - -## Vision-Language Action Models (VLAs) - -The latest paradigm since 2024 in AI robotics are **[Vision-Language Action Models](https://arxiv.org/abs/2406.09246) (VLAs)**. They leverage **[Large Language Models](https://en.wikipedia.org/wiki/Large_language_model) (LLMs)** to understand and act on human instructions. - -- VLA models are particularly well-suited for robotics because **they function as a brain**. -- VLA process both **images** and **text** instructions to predict the next **action**. -- VLA were trained using **internet-scale data**, so they have some **common sense**. - -Unlike AI models that generate text (like ChatGPT), these models output actions, such as *move left*. - -Essentially, with VLA, you could prompt your robot to "pick up the red ball" and it would do so. - -The [phospho starter pack](https://robots.phospho.ai) helps you learn and experiment with VLAs. - -## What are the latest architectures in AI robotics? - -Since 2024, there have been breakthroughs in AI robotics. Here are some of the latest ideas in AI robotics. - -### ACT (Action Chunking Transformer) - -[ACT (Action Chunking Transformer)](https://github.com/Shaka-Labs/ACT) (October 2024) is a popular repo that that showcases how to use transformers for robotics. The model is trained to predict the action sequences based on the current state of the robot and cameras' images. ACT is an efficient way to do imitation learning. [Learn more.](https://arxiv.org/abs/2406.09246) - - -**Imitation Learning** is a popular approach to train AI models for robotics. In imitation learning, the robot learns by mimicking human demonstrations. It's like teaching a child to ride a bike by showing them how it's done. - -Usually, the demonstrations are collected by **teleoperating** the robot. The robot learns to mimic the actions of the human operator. It's mainly used for tasks that require human-like dexterity, such as picking up objects. - - -![ACT model architecture](/assets/policies-act.png) - -**How it works**: -- You record episodes of your robot performing a task. (e.g., picking up a lego brick). -- The model learns from this data and enacts a policy based on it. (e.g., it will pick up the lego brick no matter where it is placed). - -**Why use ACT?** -- Typically requires ~30 episodes for training -- Can run on an RTX 3000 series GPU in less than 30 minutes. -- This is a great starting point to get your hands dirty with AI in robotics. -- You don't need prompts to train the model. - - - A few dozens of episodes are enough to train ACT to reproduce human demonstrations. - - -### OpenVLA - -[OpenVLA](https://github.com/openvla/openvla?tab=readme-ov-file#getting-started) (June 2024) is a great repo that showcases a more advanced model designed for **complex robotics tasks**. The architecture of OpenVLA include a [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b) model (July 2023) that receives a prompt describing the task. This gives the model some common sense and allows it to generalize to new tasks. - -![OpenVLA model architecture](/assets/policies-openvla.png) - -**Key differences with ACT:** -- Training such a model requires more data and computational power. -- Typically needs ~100 episodes for training -- Training takes a few hours on an NVIDIA A100 GPU. - -For more details, check out [Nvidia's blog post](https://www.jetson-ai-lab.com/openvla.html) on OpenVLA and the [arxiV paper](https://arxiv.org/pdf/2406.09246). 
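Before moving on to diffusion-based models, here is a schematic of how an ACT- or VLA-style policy from the sections above is typically called in a control loop: it takes the current images, robot state and, for VLAs, a text prompt, and returns a chunk of future actions to execute. This is a hand-written sketch of the pattern, not the API of any specific library.

```python
import numpy as np

class ChunkedPolicy:
    """Stand-in for a trained ACT/VLA model: returns a chunk of future actions.
    A real model would be a neural network; here we just return zeros."""
    def __init__(self, chunk_size: int = 50, action_dim: int = 6):
        self.chunk_size, self.action_dim = chunk_size, action_dim

    def predict(self, images: dict, state: np.ndarray, prompt: str = "") -> np.ndarray:
        return np.zeros((self.chunk_size, self.action_dim))   # (chunk, action_dim)

policy = ChunkedPolicy()
state = np.zeros(6)                                    # e.g. joint angles
images = {"main": np.zeros((240, 320, 3), dtype=np.uint8)}

for step in range(200):
    if step % policy.chunk_size == 0:                  # re-plan once per chunk
        actions = policy.predict(images, state, prompt="pick up the red ball")
    action = actions[step % policy.chunk_size]
    # robot.send_joint_targets(action)  # hypothetical call to your robot driver
```

Predicting a chunk of actions per forward pass is what keeps inference fast enough for real-time control; it is also why, for example, the SmolVLA inference configuration later in these docs sets `n_action_steps` to 50.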
- -### Diffusion Transformers - -**Diffusion transformers** are a family of models based on the **[diffusion process](https://en.wikipedia.org/wiki/Diffusion_model)**. Instead of deterministically mapping states to actions, the model **hallucinates** (generates) the **most probable next action** based on **patterns learned from data**. You can also see this as **denoising** actions. This mechanism is common to many image generation models (e.g., DALL-E, Stable Diffusion, Midjourney...). - -![Diffusion transformer model architecture](/assets/policies-rdt.png) - -**Why consider Diffusion Transformers?** -- The current **#1 model in robotics** on Hugging Face is a diffusion transformer called [RDT-1b](https://huggingface.co/robotics-diffusion-transformer/rdt-1b) (May 2024). -- Fine-tuning the model on your own data is expensive, but inference is fast. - -## What are the latest models in AI robotics? - -Here are some of the latest models that combine ideas from ACT, OpenVLA, and Diffusion Transformers. - -### gr00t-n1-2B and gr00t-n1.5-3B by Nvidia - -[GR00T-N1 (Generalist Robot 00 Technology)](https://github.com/NVIDIA/Isaac-GR00T) (March 2025) is NVIDIA's foundation model for robots. It's a performant model, trained on a lot of data, which makes it ideal for fine-tuning. The model weights [are available on Hugging Face](https://huggingface.co/nvidia/GR00T-N1-2B). - -GR00T-N1 combines both a [VLA](#openvla) for language understanding and [Diffusion transformers](#diffusion-transformers) for fine-grained control. For details, see their [paper on arXiv](https://arxiv.org/abs/2503.14734). - -![GR00T-N1 model architecture](/assets/policies-gr00t.png) - -**Key features:** -- Processes natural language instructions, camera feeds, and sensor data to generate actions. -- Based on denoising of the action space, much like a Diffusion transformer. -- Trained on massive datasets of human movements, 3D environments, and AI-generated data. - -**Why use GR00T-N1?** -- Typically requires ~50 episodes for training. -- Supports prompting and zero-shot learning for tasks not explicitly seen during training. -- Small model size (2B parameters) for efficient fine-tuning and fast inference on Nvidia Jetson devices. - - -[GR00T N1.5](https://huggingface.co/nvidia/GR00T-N1.5-3B) (June 2025) is an updated version of Nvidia's open foundation model for humanoid robots. It's also open source, but has 3B parameters instead of 2B like gr00t n1. The model weights are available on [Hugging Face](https://huggingface.co/nvidia/GR00T-N1.5-3B). - -Key differences of gr00t n1.5 compared to gr00t n1 are: -- The VLM is frozen during both pretraining and finetuning. -- The adapter MLP connecting the vision encoder to the LLM is simplified and adds layer normalization to both visual and text token embeddings input to the LLM. - - - The gr00t-N1.5 model is a promptable model by NVIDIA - - -### SmolVLA by Hugging Face - -[SmolVLA](https://huggingface.co/blog/smolvla) (June 2025) is a small, open-source Vision-Language-Action (VLA) model from Hugging Face designed to be efficient and accessible. It was created as a lightweight, reproducible, and performant alternative to large, proprietary models that often have high computational costs. The model, whose weights are available on [Hugging Face](https://huggingface.co/collections/smol-ai/smolvla-665893a9033433a047029562), was trained entirely on publicly available, community-contributed datasets. - -It's a 450M-parameter model, trained with 30,000 hours of compute.
- -![SmolVLA model architecture](/assets/policies-smolvla.png) - -**How it works**: -* SmolVLA has a modular architecture with two main parts: a vision-language model (a cut-down SmolVLM) that processes images and text, and an "action expert" that generates the robot's next moves. -* The action expert is a compact transformer that uses a flow matching objective to predict a sequence of future actions in a non-autoregressive way. -* The model needs to be fine-tuned on a specific robot and task. Fine-tuning takes about 8 hours on a single NVIDIA A100 GPU. - - - SmolVLA is an open-source model by LeRobot - - -### pi0, pi-0 FAST, and pi0.5 by Physical Intelligence - -[pi0](https://github.com/Physical-Intelligence/openpi) (October 2024), also written as **π₀** or pi zero, is a flow-based diffusion vision-language-action model (VLA) by Physical Intelligence. The weights of pi0 are open-sourced [on Hugging Face](https://huggingface.co/blog/pi0). [Learn more.](https://www.physicalintelligence.company/blog/pi0) - -![pi0 model architecture](/assets/policies-pi0.png) - -[pi0 FAST](https://github.com/Physical-Intelligence/openpi) (February 2025), also written as **π₀-FAST** or pi zero FAST, is an **autoregressive VLA**, based on the FAST action tokenizer. Similar to how LLMs generate text token by token, pi0 FAST generates actions token by token. [Learn more.](https://www.physicalintelligence.company/research/fast) - -![pi0 FAST model architecture](/assets/policies-pi0-fast.png) - -[pi0.5](https://www.physicalintelligence.company/blog/pi05) (April 2025) is a Vision-Language-Action model by Physical Intelligence that focuses on "open-world generalization." It's designed to enable robots to perform tasks in entirely new environments that they have not seen during training, a significant step toward creating truly general-purpose robots for homes and other unstructured spaces. While the [research](https://www.physicalintelligence.company/download/pi05.pdf) and results are public, the model itself is not open-source. - -![pi0.5 model architecture](/assets/policies-pi0.5.png) - - - Head over to phospho cloud to start training pi0.5 on your own dataset. - - - -### RT-2 and AutoRT by Google DeepMind - -[**RT-2**](https://github.com/kyegomez/RT-2) (July 2023) is Google DeepMind's twist on VLAs. It's a closed-source model, very similar to OpenVLA, based on the PaLM architecture. The model is trained on a large dataset of human demonstrations. [Learn more.](https://arxiv.org/pdf/2307.15818) - -![RT-2 model architecture](/assets/policies-rt2.png) - -[**AutoRT**](https://github.com/kyegomez/AutoRT) (January 2024) is a framework by Google DeepMind, designed for robot fleets and data collection. An LLM is used to generate "to do lists" for robots based on descriptions of the environment. The to-do list tasks are then executed by teleoperators, a scripted pick policy, or RT-2 (Google's VLA). [Learn more.](https://auto-rt.github.io/static/pdf/AutoRT.pdf) - -![AutoRT model architecture](/assets/policies-autort.png) - -## LeRobot Integration - -[LeRobot is a GitHub repo by Hugging Face](https://github.com/huggingface/lerobot/tree/main/lerobot/common/policies) which implements training scripts for various policies in a standardized way.
Supported policies include: - -- act -- diffusion -- pi0 -- tdmpc (September 2022) -- vqbet (October 2023) - -## More models - -Here is [a list](https://github.com/epoch-research/robotic-manipulation-compute/blob/main/data/Robotics%20Models.csv) compiling more references. \ No newline at end of file diff --git a/mintlify/learn/train-smolvla.mdx b/mintlify/learn/train-smolvla.mdx deleted file mode 100644 index 549afab..0000000 --- a/mintlify/learn/train-smolvla.mdx +++ /dev/null @@ -1,343 +0,0 @@ ---- -title: "Train SmolVLA" -description: "How to Train and Run SmolVLA with LeRobot: A Step-by-Step Guide" ---- - -In this tutorial, we will walk you through the process of fine-tuning a SmolVLA model and deploying it on a real robot arm. We will cover environment setup, training, inference, and common troubleshooting issues. - - -This tutorial is for LeRobot by Hugging Face, which is different than phosphobot. It's geared towards more advanced users with a good understanding of Python and machine learning concepts. If you're new to robotics or AI, we recommend starting with the [phosphobot documentation](https://docs.phospho.ai/). - - - -**This tutorial may be outdated** - -The [LeRobot](https://github.com/huggingface/lerobot) library is under active development, and the codebase changes frequently. While this tutorial is accurate as of June 11, 2025, some steps or code fixes may become obsolete. Always refer to the official LeRobot documentation for the most up-to-date information. - - - -## What is LeRobot by Hugging Face? - -![LeRobot logo](https://cdn-uploads.huggingface.co/production/uploads/631ce4b244503b72277fc89f/MNkMdnJqyPvOAEg20Mafg.png) - -LeRobot is a platform designed to make real-world robotics more accessible for everyone. It provides pre-trained models, datasets, and tools in PyTorch. - -It focuses on state-of-the-art approaches in **imitation learning** and **reinforcement learning**. - -With LeRobot, you get access to: - -- Pretrained models for robotics applications -- Human-collected demonstration datasets -- Simulated environments to test and refine AI models - -Useful links: - -- [LeRobot on GitHub](https://github.com/huggingface/lerobot) -- [LeRobot on Hugging Face](https://huggingface.co/lerobot) -- [AI models for robotics](https://huggingface.co/models?pipeline_tag=robotics&sort=trending) - -### Introduction to SmolVLA - -SmolVLA is a 450M parameter, open-source Vision-Language-Action (VLA) model from Hugging Face's LeRobot team. It's designed to run efficiently on consumer hardware by using several clever tricks, such as skipping layers in its Vision-Language Model (VLM) backbone and using asynchronous inference to compute the next action while the current one is still executing. - -- [arxiv paper](https://arxiv.org/abs/2506.01844) -- [blog post](https://huggingface.co/blog/smolvla) -- [model card](https://huggingface.co/lerobot/smolvla_base) - -### Part 1: Training the SmolVLA Model with LeRobot by Hugging Face - - - -#### 1.1 Environment Setup for LeRobot by Hugging Face - -Setting up a clean Python environment is crucial to avoid dependency conflicts. We recommend [using `uv`, a fast and modern Python package manager.](https://docs.astral.sh/uv/getting-started/installation/) - -1. **Install `uv`:** - ```bash - curl -LsSf https://astral.sh/uv/install.sh | sh - ``` - -2. 
**Clone the LeRobot Repository:** - ```bash - git clone https://github.com/huggingface/lerobot.git - cd lerobot - ``` - > **๐Ÿ’ก Pro Tip:** Before you start, run `git pull` inside the `lerobot` directory to make sure you have the latest version of the library. - -3. **Create a Virtual Environment and Install Dependencies:** - This tutorial uses Python 3.10. - - ```bash - # Create and activate a virtual environment - uv venv - source .venv/bin/activate - - # Install SmolVLA and its dependencies - uv pip install -e ".[feetech,smolvla]" - ``` - -#### 1.2 Training on a GPU-enabled Machine with LeRobot by Hugging Face - -Training a VLA model is computationally intensive and requires a powerful GPU. This example uses an Azure Virtual Machine with an NVIDIA A100 GPU, but any modern NVIDIA GPU with sufficient VRAM should work. - -> **Note on MacBook Pro:** While it's technically possible to train on a MacBook Pro with an M-series chip (using the `mps` device), it is extremely slow and not recommended for serious training runs. - -1. **The Training Command:** - We will fine-tune the base SmolVLA model on a "pick and place" dataset from the Hugging Face Hub. - - ```bash - # We recommend using tmux to run the training session in the background - tmux - - # Start the training - uv run lerobot/scripts/train.py \ - --policy.path=lerobot/smolvla_base \ - --dataset.repo_id=PLB/phospho-playground-mono \ - --batch_size=256 \ - --steps=30000 \ - --wandb.enable=true \ - --save_freq=5000 \ - --wandb.project=smolvla - ``` - * `--save_freq`: Saves a model checkpoint every 5000 steps, which is useful for not losing your work. - > **Note on WandB:** As of June 11, 2025, Weights & Biases logging (`wandb`) may have issues in the current version of LeRobot. If you encounter errors, you can disable it by changing the flag to `--wandb.enable=false`. - -2. **Fixing config.json** You need to change `n_action_steps` in the `config.json` file. The default value is set to 1, but for inference on SmolVLA, it should be set to 50. This is only used during inference, but it's easier to fix it now rather than later (before uploading the model to the Hugging Face Hub). - - * **Locate the config.json file:** It will be in the `lerobot/smolvla_base` directory. - * **Edit the file:** Open it in a text editor and change the line: - ```json - "n_action_steps": 1, - ``` - to - ```json - "n_action_steps": 50, - ``` - - > **Note:** If you don't change this, the inference will be very slow, as the model will only predict one action at a time instead of a sequence of actions. - -3. **Uploading the Model to the Hub:** - Once training is complete, you'll need to upload your fine-tuned model to the Hugging Face Hub to use it for inference. - - * **Login to your Hugging Face account:** - ```bash - huggingface-cli login - ``` - * **Upload your model checkpoint:** The trained model files will be in a directory like `outputs/train/YYYY-MM-DD_HH-MM-SS/`. - ```bash - # Replace with your HF username, desired model name, and the actual output path - huggingface-cli upload your-hf-username/your-model-name outputs/train/2025-06-04_18-21-25/checkpoints/last/pretrained_model pretrained_model - ``` - - -### Part 2: Training on Google Colab with LeRobot by Hugging Face - -Running inference is often done on a different machine. Google Colab is a popular choice, but it comes with its own set of challenges. - -1. **Initial Setup on Colab:** - Start by cloning the repository. 
- ```python - # Use --depth 1 for a faster, shallow clone - !git clone --depth 1 https://github.com/huggingface/lerobot.git - %cd lerobot - !pip install -e ".[smolvla]" - ``` -2. **Fixing the `torchcodec` Error:** - You will likely encounter a `RuntimeError: Could not load libtorchcodec`. This is because the default PyTorch version in Colab is incompatible with the `torchcodec` version required by LeRobot. - - **The fix is to downgrade `torchcodec`:** - ```python - !pip install torchcodec==0.2.1 - ``` - After running this, you must **restart the Colab runtime** for the change to take effect. - -3. **Avoiding Rate Limits:** - Colab instances share IP addresses, which can lead to getting rate-limited by the Hugging Face Hub when downloading large datasets. If you see `HTTP Error 429: Too Many Requests`, you have two options: - * **Wait:** The client will automatically retry with an exponential backoff. - * **Use a Local Dataset:** Download the dataset to your Google Drive, mount the drive in Colab, and point the script to the local path instead of the `repo_id`. - - -### Part 3: LeRobot training Advanced Troubleshooting & Code Fixes - -Here are some other common issues you might face and how to solve them. - -#### Issue: `ffmpeg` or `libtorchcodec` Errors on macOS -* **Problem:** On macOS, you might encounter `RuntimeError`s related to `ffmpeg` or shared libraries not being found, even if they are installed. This is often a dynamic library path issue. -* **Fix:** Explicitly set the `DYLD_LIBRARY_PATH` environment variable to include the path where Homebrew installs libraries. - ```bash - # Add this to your ~/.zshrc or ~/.bashrc file for a permanent fix - export DYLD_LIBRARY_PATH="/opt/homebrew/lib:/usr/local/lib:$DYLD_LIBRARY_PATH" - ``` - -#### Issue: `ImportError: cannot import name 'GradScaler'` -* **Problem:** This error occurs if your PyTorch version is too old. SmolVLA requires `torch>=2.3.0`. -* **Fix:** Upgrade PyTorch in your `uv` environment. - ```bash - uv pip install --upgrade torch - ``` - -### Part 4: Running Inference on a Real SO-100 or SO-101 Robot with LeRobot by Hugging Face - - - -The LeRobot library is integrated with the SO-100 and SO-101 robots, allowing you to run inference directly on these devices. This section will guide you through the hardware setup, calibration, and running the inference script with LeRobot. - - -You can use the [robots from our dev kit](https://robots.phospho.ai) for this step. However, the LeRobot setup is different and completly independent from phosphobot. Be careful and do not mix the two setups. - - -#### 2.1 LeRobot Hardware Setup and Calibration - -1. **Hardware Connections:** - * Connect both your **leader arm** and **follower arm** to your computer via USB. - * Connect your cameras (context camera and wrist camera). - -2. **Finding Robot Ports:** - Run this script to identify the USB ports for each arm. - ```bash - uv run lerobot/scripts/find_motors_bus_port.py - ``` - Note the port paths (e.g., `/dev/tty.usbmodemXXXXXXXX`). - -3. **Calibrating the Arms:** - The calibration process saves a file with the min/max range for each joint. - * **Follower Arm:** - ```bash - uv run python -m lerobot.calibrate --robot-type=so100_follower --robot-port=/dev/tty.usbmodemXXXXXXXX --robot-id=follower_arm - ``` - * **Leader Arm:** - ```bash - uv run python -m lerobot.calibrate --robot-type=so100_leader --robot-port=/dev/tty.usbmodemYYYYYYYY --robot-id=leader_arm - ``` - -4. 
**Test Calibration with Teleoperation:** - Before running the AI, verify that the calibration works by teleoperating the robot. This lets you control the follower arm with the leader arm. - - ```bash - uv run python -m lerobot.teleoperate \ - --robot-type=so100_follower \ - --robot-port=/dev/tty.usbmodemXXXXXXXX \ - --robot-id=follower_arm \ - --teleop-type=so100_leader \ - --teleop-port=/dev/tty.usbmodemYYYYYYYY \ - --teleop-id=leader_arm - ``` - If the follower arm correctly mimics the movements of the leader arm, your calibration is successful. - -5. **Finding Camera Indices:** - Run this script to list all connected cameras and their indices. - ```bash - uv run lerobot/scripts/find_cameras.py opencv - ``` - Identify the indices for your context and wrist cameras. - -#### 2.2 Running the LeRobot Inference Script - -This is the main command to make the robot move. - -```bash -uv run python -m lerobot.record \ ---robot-type=so100_follower \ ---robot-port=/dev/tty.usbmodemXXXXXXXX \ ---robot-cameras="{ 'images0': {'type': 'opencv', 'index_or_path': 1, 'width': 320, 'height': 240, 'fps': 30}, 'images1': {'type': 'opencv', 'index_or_path': 2, 'width': 320, 'height': 240, 'fps': 30}}" \ ---robot-id=follower_arm \ ---teleop-type=so100_leader \ ---teleop-port=/dev/tty.usbmodemYYYYYYYY \ ---teleop-id=leader_arm \ ---display-data=false \ ---dataset-repo-id=your-hf-username/eval_so100 \ ---dataset-single-task="Put the green lego brick in the box" \ ---policy-path=oulianov/smolvla-lego -``` -* `--policy-path`: Note that this time we do not add the `/pretrained_model` subfolder. We will fix this in the code. - -### Part 5: LeRobot Troubleshooting and Code Fixes - -#### Issue 1: Unit Mismatch (Radians vs. Degrees) -* **Problem:** The SmolVLA model outputs actions in the same units as its training data. Some datasets use radians. For example, the datasets recorder with phosphobot such as `PLB/phospho-playground-mono` uses radians. However, the LeRobot SO-100 driver expects actions in degrees. This will cause the robot to move erratically or barely at all. -* **Fix:** Convert the model's output from radians to degrees. - - * **File:** `lerobot/common/policies/smolvla/modeling_smolvla.py` - * **Location:** In the `select_action` method. - * **Code:** Add the following lines just after the `# Unpad actions` section. - ```python - # # # START HACK # # # - # Convert from radians to degrees - actions = actions * 180.0 / math.pi - # # # END HACK # # # - ``` - -#### Issue 2: Flimsy Leader Arm Connection -* **Problem:** The leader arm can sometimes have an unstable connection, causing the calibration or teleoperation script to crash if it fails to read a motor position. -* **Fix:** Add a `try-except` block to gracefully handle connection errors. - * **File:** `lerobot/common/robot/motors_bus.py` - * **Location:** In the `record_ranges_of_motion` method. - * **Code:** Wrap the `while True:` loop in a `try-except` block. - ```python - # In the record_ranges_of_motion method - while True: - try: # <-- ADD THIS LINE - positions = self.sync_read("Present_Position", motors, normalize=False) - mins = {m: min(mins[m], positions[m]) for m in motors} - maxs = {m: max(maxs[m], positions[m]) for m in motors} - if display_values: - # print motor positions - ... 
- if user_pressed_enter: - break - except Exception as e: # <-- ADD THIS LINE - logger.error(f"Error reading positions: {e}") # <-- ADD THIS LINE - continue # <-- ADD THIS LINE - ``` - -#### Issue 3: `config.json` or `model.safetensors` Not Found -* **Problem:** When running inference, the script may fail with `FileNotFoundError: config.json not found on the HuggingFace Hub` because it doesn't look inside the `pretrained_model` subfolder by default. -* **Fix:** Modify the `from_pretrained` method to include the subfolder when downloading files. - * **File:** `lerobot/common/policies/pretrained.py` - * **Location:** In the `from_pretrained` class method. - * **Code:** Add the `subfolder` argument to both `hf_hub_download` calls. - ```python - # In the from_pretrained method - try: - # Download the config file and instantiate the policy. - config_file = hf_hub_download( - repo_id=model_id, - filename=CONFIG_NAME, - revision=revision, - cache_dir=cache_dir, - force_download=force_download, - proxies=proxies, - resume_download=resume_download, - token=token, - local_files_only=local_files_only, - subfolder="pretrained_model", # <-- ADD THIS LINE - ) - # ... - # ... - try: - # Download the model file. - model_file = hf_hub_download( - repo_id=model_id, - filename=SAFETENSORS_SINGLE_FILE, - revision=revision, - cache_dir=cache_dir, - force_download=force_download, - proxies=proxies, - resume_download=resume_download, - token=token, - local_files_only=local_files_only, - subfolder="pretrained_model", # <-- ADD THIS LINE - ) - ``` \ No newline at end of file diff --git a/mintlify/mint.json b/mintlify/mint.json deleted file mode 100644 index 3176a77..0000000 --- a/mintlify/mint.json +++ /dev/null @@ -1,179 +0,0 @@ -{ - "metadata": { - "og:site_name": "docs.phospho.ai", - "og:title": "phospho starter pack documentation", - "og:description": "The toolkit for AI robotics", - "og:url": "https://docs.phospho.ai", - "og:image": "/logo/light.svg", - "og:logo": "/logo/logo.svg", - "twitter:title": "phospho starter pack documentation", - "twitter:description": "The toolkit for AI robotics", - "twitter:url": "https://docs.phospho.ai", - "twitter:image": "/logo/light.svg", - "twitter:site": "@phospho_ai" - }, - "$schema": "https://mintlify.com/schema.json", - "name": "phospho", - "favicon": "/favicon.svg", - "colors": { - "primary": "#00FF00", - "light": "#00FF00", - "dark": "#1E1E1E", - "anchors": { - "from": "#00FF00", - "to": "#07C983" - } - }, - "modeToggle": { - "default": "dark", - "isHidden": true - }, - "topbarLinks": [ - { - "name": "Contact us", - "url": "mailto:contact@phospho.ai" - } - ], - "topbarCtaButton": { - "name": "Get hardware", - "url": "https://robots.phospho.ai?utm_source=docs" - }, - "tabs": [ - { - "name": "phospho pro", - "url": "https://phospho.ai/pro?utm_source=docs" - }, - { - "name": "Github", - "url": "https://github.com/phospho-app/phosphobot" - } - ], - "navigation": [ - { - "group": "Getting Started", - "pages": ["welcome", "installation"] - }, - { - "group": "phosphobot Basic Usage", - "pages": [ - "basic-usage/teleop", - "basic-usage/dataset-recording", - "basic-usage/dataset-operations", - "basic-usage/training", - "basic-usage/inference", - "examples/teleop" - ] - }, - { - "group": "Learn about AI and robotics", - "pages": [ - "learn/overview", - "learn/kinematics", - "learn/lerobot-dataset", - "learn/improve-robotics-ai-model", - "learn/ai-models", - "learn/train-smolvla", - "learn/policies", - "learn/gravity-compensation", - "learn/cameras" - ] - }, - { - 
"group": "Hardware", - "pages": [ - "unboxings/dk2", - "so-101/quickstart", - "so-100/quickstart", - "unboxings/dk1" - ] - }, - - { - "group": "API Reference", - "pages": [ - { - "group": "Control", - "icon": "robot", - "pages": [ - "control/calibration-sequence", - "control/move-init", - "control/move-teleoperation", - "control/move-teleoperation-ws", - "control/move-absolute-position", - "control/move-relative-position", - "control/move-leader-start", - "control/move-leader-stop", - "control/end-effector-state", - "control/read-torques", - "control/turn-torque", - "control/read-joints", - "control/write-joints", - "control/read-temperature", - "control/write-temperature" - ] - }, - { - "group": "Recording", - "icon": "database", - "pages": [ - "recording/start-recording-episode", - "recording/stop-recording-episode", - "recording/play-recording" - ] - }, - { - "group": "AI model", - "icon": "brain", - "pages": [ - "ai-training/start-training", - "ai-training/cancel-training", - "ai-training/ai-control-start", - "ai-training/ai-control-stop" - ] - }, - { - "group": "Camera", - "icon": "camera", - "pages": [ - "camera/video-feed-for-camera", - "camera/frames", - "camera/cameras-refresh" - ] - } - ] - }, - - { - "group": "Examples", - "pages": [ - "examples/control", - "examples/vision", - "examples/teleop-from-anywhere", - "examples/mcp-for-robotics" - ] - }, - { - "group": "Other", - "pages": ["faq"] - } - ], - "footerSocials": { - "twitter": "https://twitter.com/phospho_ai", - "github": "https://github.com/phospho-app/phosphobot", - "youtube": "https://www.youtube.com/@phospho_ai", - "website": "https://robots.phospho.ai?utm_source=docs", - "linkedin": "https://www.linkedin.com/company/phospho-app/", - "discord": "https://discord.gg/cbkggY6NSK" - }, - "openapi": "openapi.yml", - "analytics": { - "posthog": { - "apiKey": "phc_EesFKS4CVoyc0URzJN0FOETpg7KipCBEpRvHEvv5mDF", - "apiHost": "https://app.posthog.com" - } - }, - "feedback": { - "thumbsRating": true, - "raiseIssue": true - } -} diff --git a/mintlify/openapi.yml b/mintlify/openapi.yml deleted file mode 100644 index fb2dbf6..0000000 --- a/mintlify/openapi.yml +++ /dev/null @@ -1,4990 +0,0 @@ -components: - schemas: - AIControlStatusResponse: - additionalProperties: true - description: Response when starting the AI control. - properties: - ai_control_signal_id: - title: Ai Control Signal Id - type: string - ai_control_signal_status: - enum: - - stopped - - running - - paused - - waiting - title: Ai Control Signal Status - type: string - message: - anyOf: - - type: string - - type: 'null' - title: Message - server_info: - anyOf: - - $ref: '#/components/schemas/ServerInfoResponse' - - type: 'null' - status: - default: ok - enum: - - ok - - error - title: Status - type: string - required: - - ai_control_signal_id - - ai_control_signal_status - title: AIControlStatusResponse - type: object - AIStatusResponse: - description: Response to the AI status request. - properties: - id: - anyOf: - - type: string - - type: 'null' - description: ID of the AI control session. - title: Id - status: - description: Status of the AI control - enum: - - stopped - - running - - paused - - waiting - title: Status - type: string - required: - - status - - id - title: AIStatusResponse - type: object - AddZMQCameraRequest: - description: Request model for adding a ZMQ camera feed. - properties: - tcp_address: - description: 'TCP address of the ZMQ publisher. Format: ''tcp://:''.' 
- examples: - - tcp://localhost:5555 - title: Tcp Address - type: string - topic: - anyOf: - - type: string - - type: 'null' - description: Topic to subscribe to. If None, will subscribes to all messages - on the given TCP address. - examples: - - cabin_view - - wrist_camera - title: Topic - required: - - tcp_address - title: AddZMQCameraRequest - type: object - AdminSettingsRequest: - description: Contains the admin settings - properties: - cameras_to_record: - anyOf: - - items: - type: integer - type: array - - type: 'null' - title: Cameras To Record - dataset_name: - title: Dataset Name - type: string - episode_format: - title: Episode Format - type: string - freq: - title: Freq - type: integer - hf_private_mode: - default: false - title: Hf Private Mode - type: boolean - task_instruction: - title: Task Instruction - type: string - video_codec: - enum: - - avc1 - - hev1 - - mp4v - - hvc1 - - avc3 - - av01 - - vp09 - - av1 - title: Video Codec - type: string - video_size: - items: - type: integer - title: Video Size - type: array - required: - - dataset_name - - episode_format - - freq - - video_codec - - video_size - - task_instruction - title: AdminSettingsRequest - type: object - AdminSettingsResponse: - description: Contains the settings returned in the admin page - properties: - cameras_to_record: - anyOf: - - items: - type: integer - type: array - - type: 'null' - title: Cameras To Record - dataset_name: - title: Dataset Name - type: string - episode_format: - title: Episode Format - type: string - freq: - title: Freq - type: integer - hf_private_mode: - title: Hf Private Mode - type: boolean - task_instruction: - title: Task Instruction - type: string - video_codec: - enum: - - avc1 - - hev1 - - mp4v - - hvc1 - - avc3 - - av01 - - vp09 - - av1 - title: Video Codec - type: string - video_size: - items: - type: integer - title: Video Size - type: array - required: - - dataset_name - - freq - - episode_format - - video_codec - - video_size - - task_instruction - - cameras_to_record - - hf_private_mode - title: AdminSettingsResponse - type: object - AdminSettingsTokenResponse: - description: 'To each provider is assigned a bool, which is True - - if the token is set and valid.' - properties: - huggingface: - default: false - title: Huggingface - type: boolean - wandb: - default: false - title: Wandb - type: boolean - title: AdminSettingsTokenResponse - type: object - AllCamerasStatus: - description: Description of the status of all cameras. Use this to know which - camera to stream. - properties: - cameras_status: - items: - $ref: '#/components/schemas/SingleCameraStatus' - title: Cameras Status - type: array - is_stereo_camera_available: - default: false - description: Whether a stereoscopic camera is available. - title: Is Stereo Camera Available - type: boolean - realsense_available: - default: false - description: Whether a RealSense camera is available. - title: Realsense Available - type: boolean - video_cameras_ids: - description: List of camera ids that are video cameras. - items: - type: integer - title: Video Cameras Ids - type: array - title: AllCamerasStatus - type: object - AppControlData: - description: Type of data sent by the Metaquest app. 
- properties: - direction_x: - default: 0.0 - description: Direction vector X, normalized between -1 (left) and 1 (right) - maximum: 1.0 - minimum: -1.0 - title: Direction X - type: number - direction_y: - default: 0.0 - description: Direction vector Y, normalized between -1 (backward) and 1 - (forward) - maximum: 1.0 - minimum: -1.0 - title: Direction Y - type: number - open: - description: 0 for closed, 1 for open - title: Open - type: number - rx: - description: Absolute Pitch in degrees - title: Rx - type: number - ry: - description: Absolute Yaw in degrees - title: Ry - type: number - rz: - description: Absolute Roll in degrees - title: Rz - type: number - source: - default: right - description: Which hand the data comes from. Can be left or right. - enum: - - left - - right - title: Source - type: string - timestamp: - anyOf: - - type: number - - type: 'null' - description: Unix timestamp with milliseconds - title: Timestamp - x: - title: X - type: number - y: - title: Y - type: number - z: - title: Z - type: number - required: - - x - - y - - z - - rx - - ry - - rz - - open - title: AppControlData - type: object - AuthResponse: - properties: - authenticated: - title: Authenticated - type: boolean - is_pro_user: - anyOf: - - type: boolean - - type: 'null' - title: Is Pro User - session: - anyOf: - - $ref: '#/components/schemas/Session' - - type: 'null' - required: - - authenticated - title: AuthResponse - type: object - BaseRobotConfig: - description: Calibration configuration for a robot - properties: - gripping_threshold: - default: 80 - description: Torque threshold to consider an object gripped. This will block - the gripper position and prevent it from moving further. - exclusiveMinimum: 0.0 - title: Gripping Threshold - type: integer - name: - title: Name - type: string - non_gripping_threshold: - default: 10 - description: Torque threshold to consider an object not gripped. This will - allow the gripper to move freely. - exclusiveMinimum: 0.0 - title: Non Gripping Threshold - type: integer - pid_gains: - items: - $ref: '#/components/schemas/BaseRobotPIDGains' - title: Pid Gains - type: array - servos_calibration_position: - items: - type: number - title: Servos Calibration Position - type: array - servos_offsets: - items: - type: number - title: Servos Offsets - type: array - servos_offsets_signs: - items: - type: number - title: Servos Offsets Signs - type: array - servos_voltage: - title: Servos Voltage - type: number - required: - - name - - servos_voltage - - servos_calibration_position - title: BaseRobotConfig - type: object - BaseRobotPIDGains: - description: PID gains for servo motors - properties: - d_gain: - title: D Gain - type: number - i_gain: - title: I Gain - type: number - p_gain: - title: P Gain - type: number - required: - - p_gain - - i_gain - - d_gain - title: BaseRobotPIDGains - type: object - BrowserFilesRequest: - description: Request to browse files in a directory. - properties: - path: - title: Path - type: string - required: - - path - title: BrowserFilesRequest - type: object - CalibrateResponse: - description: Response from the calibration endpoint. - properties: - calibration_status: - description: Status of the calibration. Ends when status is success or error. 
- enum: - - error - - success - - in_progress - title: Calibration Status - type: string - current_step: - title: Current Step - type: integer - message: - title: Message - type: string - total_nb_steps: - title: Total Nb Steps - type: integer - required: - - calibration_status - - message - - current_step - - total_nb_steps - title: CalibrateResponse - type: object - CancelTrainingRequest: - properties: - training_id: - description: ID of the training to cancel. - title: Training Id - type: integer - required: - - training_id - title: CancelTrainingRequest - type: object - ChatRequest: - description: Control the robot with a natural language prompt. - properties: - chat_id: - description: Unique identifier for the chat session. If not provided, a - new UUID will be generated. - title: Chat Id - type: string - command_history: - anyOf: - - items: - type: string - type: array - - type: 'null' - description: List of previous commands to provide context for the chat. - title: Command History - images: - anyOf: - - items: - type: string - type: array - - type: 'null' - description: 'base64 encoded images to be sent with the request. ' - title: Images - prompt: - description: The task to be performed by the robot, described in natural - language. - title: Prompt - type: string - required: - - prompt - title: ChatRequest - type: object - ChatResponse: - description: Response to the chat request. - properties: - command: - anyOf: - - type: string - - type: 'null' - description: The command to be executed by the robot, generated from the - prompt. - title: Command - endpoint: - anyOf: - - type: string - - type: 'null' - description: The endpoint to call. - title: Endpoint - endpoint_params: - anyOf: - - additionalProperties: true - type: object - - type: 'null' - description: Parameters to pass to the endpoint. - title: Endpoint Params - required: - - command - title: ChatResponse - type: object - ConfirmRequest: - properties: - access_token: - title: Access Token - type: string - refresh_token: - title: Refresh Token - type: string - required: - - access_token - - refresh_token - title: ConfirmRequest - type: object - CustomTrainingRequest: - properties: - custom_command: - description: Will run this custom command as a subprocess when pressing - the train button. - title: Custom Command - type: string - required: - - custom_command - title: CustomTrainingRequest - type: object - DatasetListResponse: - description: List of datasets - properties: - local_datasets: - items: - type: string - title: Local Datasets - type: array - pushed_datasets: - items: - type: string - title: Pushed Datasets - type: array - required: - - pushed_datasets - - local_datasets - title: DatasetListResponse - type: object - DatasetRepairRequest: - properties: - dataset_path: - description: Path to the dataset to repair - examples: - - /lerobot_v2.1/example_dataset - title: Dataset Path - type: string - required: - - dataset_path - title: DatasetRepairRequest - type: object - DatasetShuffleRequest: - properties: - dataset_path: - description: Path to the dataset to shuffle - examples: - - /lerobot_v2.1/example_dataset - title: Dataset Path - type: string - required: - - dataset_path - title: DatasetShuffleRequest - type: object - DatasetSplitRequest: - properties: - dataset_path: - description: Path to the dataset to split - examples: - - /lerobot_v2.1/example_dataset - title: Dataset Path - type: string - first_split_name: - description: Name of the first split. 
- examples: - - /lerobot_v2.1/example_dataset_training - title: First Split Name - type: string - second_split_name: - description: Name of the second split. - examples: - - /lerobot_v2.1/example_dataset_validation - title: Second Split Name - type: string - split_ratio: - default: 0.8 - description: Ratio of the dataset to use for the first split. The second - split will use the rest of the dataset. - maximum: 1.0 - minimum: 0.0 - title: Split Ratio - type: number - required: - - dataset_path - - first_split_name - - second_split_name - title: DatasetSplitRequest - type: object - DeleteEpisodeRequest: - description: Request to delete an episode. - properties: - episode_id: - title: Episode Id - type: integer - path: - title: Path - type: string - required: - - path - - episode_id - title: DeleteEpisodeRequest - type: object - EndEffectorPosition: - description: 'End effector position for a movement in absolute frame. - - All zeros means the initial position, that you get by calling /move/init' - properties: - open: - description: 0 for closed, 1 for open - title: Open - type: number - rx: - anyOf: - - type: number - - type: 'null' - description: Absolute Pitch in degrees - title: Rx - ry: - anyOf: - - type: number - - type: 'null' - description: Absolute Yaw in degrees - title: Ry - rz: - anyOf: - - type: number - - type: 'null' - description: Absolute Roll in degrees - title: Rz - x: - anyOf: - - type: number - - type: 'null' - description: X position in centimeters - title: X - y: - anyOf: - - type: number - - type: 'null' - description: Y position in centimeters - title: Y - z: - anyOf: - - type: number - - type: 'null' - description: Z position in centimeters - title: Z - required: - - x - - y - - z - - rx - - ry - - rz - - open - title: EndEffectorPosition - type: object - EndEffectorReadRequest: - properties: - only_gripper: - default: false - description: If True, only return the gripper state. If False, return the - full end effector position and orientation. - title: Only Gripper - type: boolean - sync: - default: false - description: If True, the simulation will first read the motor positions, - synchronize them with the simulated robot, and then return the end effector - position.Useful for measurements, however it will take more time to respond. - title: Sync - type: boolean - title: EndEffectorReadRequest - type: object - FeedbackRequest: - properties: - ai_control_id: - description: ID of the AI control session. - title: Ai Control Id - type: string - feedback: - description: Feedback on the AI control. Can be positive or negative. 
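# Illustrative only: a DatasetSplitRequest body (defined above) built from the example
# values in this spec; split_ratio defaults to 0.8 when omitted.
# {
#   "dataset_path": "/lerobot_v2.1/example_dataset",
#   "first_split_name": "/lerobot_v2.1/example_dataset_training",
#   "second_split_name": "/lerobot_v2.1/example_dataset_validation",
#   "split_ratio": 0.8
# }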
- enum: - - positive - - negative - title: Feedback - type: string - required: - - feedback - - ai_control_id - title: FeedbackRequest - type: object - ForgotPasswordRequest: - properties: - email: - title: Email - type: string - required: - - email - title: ForgotPasswordRequest - type: object - HFDownloadDatasetRequest: - properties: - dataset_name: - title: Dataset Name - type: string - required: - - dataset_name - title: HFDownloadDatasetRequest - type: object - HFWhoamIResponse: - additionalProperties: true - properties: - message: - anyOf: - - type: string - - type: 'null' - title: Message - status: - default: ok - enum: - - ok - - error - title: Status - type: string - username: - anyOf: - - type: string - - type: 'null' - title: Username - title: HFWhoamIResponse - type: object - HTTPValidationError: - properties: - detail: - items: - $ref: '#/components/schemas/ValidationError' - title: Detail - type: array - title: HTTPValidationError - type: object - HuggingFaceTokenRequest: - description: Hugging Face token saved by the user. - properties: - token: - title: Token - type: string - required: - - token - title: HuggingFaceTokenRequest - type: object - InfoResponse: - description: Response to the /dataset/info endpoint. - properties: - image_frames: - anyOf: - - additionalProperties: - type: string - type: object - - type: 'null' - title: Image Frames - image_keys: - anyOf: - - items: - type: string - type: array - - type: 'null' - title: Image Keys - number_of_episodes: - anyOf: - - type: integer - - type: 'null' - title: Number Of Episodes - robot_dof: - anyOf: - - type: integer - - type: 'null' - title: Robot Dof - robot_type: - anyOf: - - type: string - - type: 'null' - title: Robot Type - status: - default: ok - enum: - - ok - - error - title: Status - type: string - title: InfoResponse - type: object - JointsReadRequest: - description: Request to read the joints of the robot. - properties: - joints_ids: - anyOf: - - items: - type: integer - type: array - - type: 'null' - description: If set, only read the joints with these ids. If None, read - all joints. - title: Joints Ids - source: - default: robot - description: Source of the joint angles. 'sim' means the angles are read - from the simulation, 'robot' means the angles are read from the hardware. - enum: - - sim - - robot - title: Source - type: string - unit: - default: rad - description: The unit of the angles. Defaults to radian. - enum: - - rad - - motor_units - - degrees - title: Unit - type: string - title: JointsReadRequest - type: object - JointsReadResponse: - description: Response to read the joints of the robot. - properties: - angles: - description: A list of length 6, with the position of each joint in the - unit specified in the request. If a joint is not available, its value - will be None. - items: - anyOf: - - type: number - - type: 'null' - title: Angles - type: array - unit: - default: rad - description: The unit of the angles. Defaults to radian. - enum: - - rad - - motor_units - - degrees - title: Unit - type: string - required: - - angles - title: JointsReadResponse - type: object - JointsWriteRequest: - description: Request to set the joints of the robot. - properties: - angles: - description: A list with the position of each joint. The length of the list - must be equal to the number of joints. The unit is given by the 'unit' - field. 
- items: - type: number - title: Angles - type: array - joints_ids: - anyOf: - - items: - type: integer - type: array - - type: 'null' - description: 'If set, only set the joints with these ids. If None, set all - joints.Example: ''angles''=[1,1,1], ''joints_ids''=[0,1,2] will set the - first 3 joints to 1 radian.' - title: Joints Ids - unit: - default: rad - description: The unit of the angles. Defaults to radian. - enum: - - rad - - motor_units - - degrees - title: Unit - type: string - required: - - angles - title: JointsWriteRequest - type: object - LocalDevice: - properties: - device: - title: Device - type: string - interface: - anyOf: - - type: string - - type: 'null' - title: Interface - name: - title: Name - type: string - pid: - anyOf: - - type: integer - - type: 'null' - title: Pid - serial_number: - anyOf: - - type: string - - type: 'null' - title: Serial Number - required: - - name - - device - title: LocalDevice - type: object - LoginCredentialsRequest: - properties: - email: - title: Email - type: string - password: - title: Password - type: string - required: - - email - - password - title: LoginCredentialsRequest - type: object - MergeDatasetsRequest: - properties: - first_dataset: - description: Path to the first dataset to merge - examples: - - /lerobot_v2.1/example_dataset - title: First Dataset - type: string - image_key_mappings: - additionalProperties: - type: string - description: Mapping of the image keys from the first dataset to the second - dataset. - examples: - - context_camera: context_camera_2 - wrist_camera: wrist_camera_2 - title: Image Key Mappings - type: object - new_dataset_name: - description: Name of the new dataset to create - examples: - - /lerobot_v2.1/example_dataset_merged - title: New Dataset Name - type: string - second_dataset: - description: Path to the second dataset to merge - examples: - - /lerobot_v2.1/example_dataset_to_merge_with - title: Second Dataset - type: string - required: - - first_dataset - - second_dataset - - new_dataset_name - - image_key_mappings - title: MergeDatasetsRequest - type: object - ModelConfigurationRequest: - properties: - model_id: - description: Hugging Face model id to use - examples: - - PLB/GR00T-N1-lego-pickup-mono-2 - pattern: ^\s*\S.*$ - title: Model Id - type: string - model_type: - description: Type of model to use. - enum: - - gr00t - - ACT - - ACT_BBOX - title: Model Type - type: string - required: - - model_id - - model_type - title: ModelConfigurationRequest - type: object - ModelConfigurationResponse: - properties: - checkpoints: - description: List of available checkpoints for the model. - examples: - - - '100' - - '500' - items: - type: string - title: Checkpoints - type: array - video_keys: - description: List of video keys for the model. These are the keys used to - access the videos in the dataset. - examples: - - - video_0 - - video_1 - items: - type: string - title: Video Keys - type: array - required: - - video_keys - title: ModelConfigurationResponse - type: object - MoveAbsoluteRequest: - description: 'Move the robot to an absolute position. All zeros means the initial - position, - - that you get by calling /move/init.' - properties: - max_trials: - default: 10 - description: The maximum number of trials to reach the target position. 
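# Illustrative only: a JointsWriteRequest body (defined above), mirroring the example
# given in its joints_ids description - set the first three joints to 1 radian.
# {
#   "angles": [1, 1, 1],
#   "joints_ids": [0, 1, 2],
#   "unit": "rad"
# }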
- minimum: 1.0 - title: Max Trials - type: integer - open: - anyOf: - - type: number - - type: 'null' - description: 0 for closed, 1 for open - title: Open - orientation_tolerance: - default: 0.2 - description: Increase max_trials and decrease tolerance to get more precision.Orientation - tolerance is the euclidean distance between the target and the current - orientation. - minimum: 0.0 - title: Orientation Tolerance - type: number - position_tolerance: - default: 0.03 - description: Increase max_trials and decrease tolerance to get more precision.Position - tolerance is the euclidean distance between the target and the current - position. - minimum: 0.0 - title: Position Tolerance - type: number - rx: - anyOf: - - type: number - - type: 'null' - description: Absolute Pitch in degrees. If None, inverse kinematics will - be used to calculate the best position. - title: Rx - ry: - anyOf: - - type: number - - type: 'null' - description: Absolute Yaw in degrees. If None, inverse kinematics will be - used to calculate the best position. - title: Ry - rz: - anyOf: - - type: number - - type: 'null' - description: Absolute Roll in degrees. If None, inverse kinematics will - be used to calculate the best position. - title: Rz - x: - anyOf: - - type: number - - type: 'null' - description: X position in centimeters - title: X - y: - anyOf: - - type: number - - type: 'null' - description: Y position in centimeters - title: Y - z: - anyOf: - - type: number - - type: 'null' - description: Z position in centimeters - title: Z - title: MoveAbsoluteRequest - type: object - NetworkCredentials: - properties: - password: - title: Password - type: string - ssid: - title: Ssid - type: string - required: - - ssid - - password - title: NetworkCredentials - type: object - NetworkDevice: - properties: - ip: - title: Ip - type: string - mac: - title: Mac - type: string - required: - - ip - - mac - title: NetworkDevice - type: object - RecordingPlayRequest: - description: Request to play a recorded episode. - examples: - - dataset_name: example_dataset - episode_id: 0 - - episode_path: ~/phosphobot/lerobot_v2/example_dataset/chunk-000/episode_000000.json - replicate: false - robot_id: - - 0 - - 1 - properties: - dataset_format: - default: lerobot_v2.1 - description: Format of the dataset to play. This is used to determine how - to read the episode data. - enum: - - lerobot_v2 - - lerobot_v2.1 - title: Dataset Format - type: string - dataset_name: - anyOf: - - type: string - - type: 'null' - description: Name of the dataset to play the episode from. If None, defaults - to the last dataset recorded. - examples: - - example_dataset - title: Dataset Name - episode_id: - anyOf: - - type: integer - - type: 'null' - description: ID of the episode to play. If a dataset_name is specified but - episode_id is None, plays the last episode recorded of this dataset. If - dataset_name is None, this is ignored and plays the last episode recorded. - examples: - - 0 - title: Episode Id - episode_path: - anyOf: - - type: string - - type: 'null' - description: (Optional) If you recorded your data with LeRobot v2 compatible - format, you can directly specifiy the path to the .parquet file of the - episode to play. If specified, you don't have to pass a dataset_name or - episode_id. - examples: - - ~/phosphobot/lerobot_v2/example_dataset/chunk-000/episode_000000.json - title: Episode Path - interpolation_factor: - default: 4 - description: Smoothen the playback by interpolating between frames. 
1 means - no interpolation, 2 means 1 frame every 2 frames, etc. 4 is the recommended - value. - minimum: 1.0 - title: Interpolation Factor - type: integer - playback_speed: - default: 1.0 - description: Speed of the playback. 1.0 is normal speed, 0.5 is half speed, - 2.0 is double speed. High speed may cause the robot to break. - minimum: 0.0 - title: Playback Speed - type: number - replicate: - default: true - description: 'If False and there are more robots than number of robots in - the episode, extra robots will not move. If True, all the extras robots - will replicate movements of the robots in the episode.Examples: If there - are 4 robots and the episode has 2 robots, if replicate is True, robot - 3 and 4 will replicate the movements of robot 1 and 2. If replicate is - False, robot 3 and 4 will not move.' - title: Replicate - type: boolean - robot_id: - anyOf: - - type: integer - - items: - type: integer - type: array - - type: 'null' - description: ID of the robot to play the episode on. If None, plays on all - robots. If a list, plays on the robots with the given IDs. - examples: - - 0 - - - 0 - - 1 - title: Robot Id - robot_serials_to_ignore: - anyOf: - - items: - type: string - type: array - - type: 'null' - description: List of robot serial ids to ignore. If set to None, plays on - all available robots. - examples: - - - /dev/ttyUSB0 - title: Robot Serials To Ignore - title: RecordingPlayRequest - type: object - RecordingStartRequest: - description: Request to start the recording of an episode. - properties: - add_metadata: - anyOf: - - additionalProperties: - items: {} - type: array - type: object - - type: 'null' - description: Passing a dictionnary will store the value in each row of the - recorded dataset. The key is the name of the column, and the value is - a list. If set to None, no additional metadata is saved. - examples: - - bbox_position: - - 0.5 - - 1.0 - - 0.0 - - 0.5 - title: Add Metadata - branch_path: - anyOf: - - type: string - - type: 'null' - description: Path to the branch to push the dataset to, in addition to the - main branch. If set to None, only push to the main branch. Defaults to - None. - title: Branch Path - cameras_ids_to_record: - anyOf: - - items: - type: integer - type: array - - type: 'null' - description: List of camera ids to record. If set to None, records all available - cameras. - examples: - - - 0 - - 1 - title: Cameras Ids To Record - dataset_name: - anyOf: - - type: string - - type: 'null' - description: Name of the dataset to save the episode in.If None, defaults - to the value set in Admin Configuration. - examples: - - example_dataset - title: Dataset Name - enable_rerun_visualization: - default: false - description: Enable rerun - title: Enable Rerun Visualization - type: boolean - episode_format: - anyOf: - - enum: - - json - - lerobot_v2 - - lerobot_v2.1 - type: string - - type: 'null' - description: 'Format to save the episode. - - `json` is compatible with OpenVLA and stores videos as a series of npy. - - `lerobot_v2` is compatible with [lerobot training.](https://docs.phospho.ai/learn/ai-models).If - None, defaults to the value set in Admin Configuration.' - examples: - - lerobot_v2.1 - title: Episode Format - freq: - anyOf: - - type: integer - - type: 'null' - description: Records steps of the robot at this frequency.If None, defaults - to the value set in Admin Configuration. - examples: - - 30 - title: Freq - instruction: - anyOf: - - type: string - - type: 'null' - description: A text describing the recorded task. 
If set to None, defaults - to the value set in Admin Configuration. - examples: - - Pick up the orange brick and put it in the black box. - title: Instruction - leader_arm_ids: - anyOf: - - items: - type: string - type: array - - type: 'null' - description: Serial numbers of the leader arms used during the recording - examples: - - - /dev/ttyUSB0 - title: Leader Arm Ids - robot_serials_to_ignore: - anyOf: - - items: - type: string - type: array - - type: 'null' - description: List of robot serial ids to ignore. If set to None, records - all available robots. - examples: - - - /dev/ttyUSB0 - title: Robot Serials To Ignore - save_cartesian: - default: false - description: Record cartesian positions of the robots as well, this will - make your dataset incompatible with lerobot and it only works for robots - with simulators. Defaults to False. - title: Save Cartesian - type: boolean - target_video_size: - anyOf: - - maxItems: 2 - minItems: 2 - prefixItems: - - type: integer - - type: integer - type: array - - type: 'null' - description: Target video size for the recording, all videos in the dataset - should have the same size. If set to None, defaults to the value set in - Admin Configuration. - examples: - - - 320 - - 240 - title: Target Video Size - video_codec: - anyOf: - - enum: - - avc1 - - hev1 - - mp4v - - hvc1 - - avc3 - - av01 - - vp09 - - av1 - type: string - - type: 'null' - description: Codec to use for the video saving.If None, defaults to the - value set in Admin Configuration. - examples: - - avc1 - title: Video Codec - title: RecordingStartRequest - type: object - RecordingStopRequest: - description: Request to stop the recording of the episode. - properties: - save: - default: true - description: Whether to save the episode to disk. Defaults to True. - title: Save - type: boolean - title: RecordingStopRequest - type: object - RecordingStopResponse: - description: Response when the recording is stopped. The episode is saved in - the given path. - properties: - episode_folder_path: - anyOf: - - type: string - - type: 'null' - description: Path to the folder where the episode is saved. - title: Episode Folder Path - episode_index: - anyOf: - - type: integer - - type: 'null' - description: Index of the recorded episode in the dataset. - title: Episode Index - required: - - episode_folder_path - - episode_index - title: RecordingStopResponse - type: object - RelativeEndEffectorPosition: - description: 'Relative end effector position for a movement in relative frame. - - Useful for OpenVLA-like control.' - properties: - open: - anyOf: - - type: number - - type: 'null' - description: 0 for closed, 1 for open. If None, use the last value. 
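# Illustrative only: a RecordingStartRequest body (defined above) using the example
# values from this spec. Each of these fields is optional and, per its description,
# falls back to the Admin Configuration when omitted.
# {
#   "dataset_name": "example_dataset",
#   "episode_format": "lerobot_v2.1",
#   "freq": 30,
#   "instruction": "Pick up the orange brick and put it in the black box.",
#   "video_codec": "avc1",
#   "target_video_size": [320, 240]
# }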
- title: Open - rx: - anyOf: - - type: number - - type: 'null' - description: Relative Pitch in degrees - title: Rx - ry: - anyOf: - - type: number - - type: 'null' - description: Relative Yaw in degrees - title: Ry - rz: - anyOf: - - type: number - - type: 'null' - description: Relative Roll in degrees - title: Rz - x: - anyOf: - - type: number - - type: 'null' - description: Delta X position in centimeters - title: X - y: - anyOf: - - type: number - - type: 'null' - description: Delta Y position in centimeters - title: Y - z: - anyOf: - - type: number - - type: 'null' - description: Delta Z position in centimeters - title: Z - title: RelativeEndEffectorPosition - type: object - ResetPasswordRequest: - properties: - access_token: - description: Access token from the reset email - title: Access Token - type: string - new_password: - description: New password to set for the user - title: New Password - type: string - refresh_token: - description: Refresh token from the reset email - title: Refresh Token - type: string - required: - - access_token - - refresh_token - - new_password - title: ResetPasswordRequest - type: object - RobotConfigResponse: - description: Response model for robot configuration. - properties: - config: - anyOf: - - $ref: '#/components/schemas/BaseRobotConfig' - - type: 'null' - gripper_joint_index: - anyOf: - - type: integer - - type: 'null' - title: Gripper Joint Index - name: - title: Name - type: string - resolution: - default: 4096 - title: Resolution - type: integer - robot_id: - title: Robot Id - type: integer - servo_ids: - items: - type: integer - title: Servo Ids - type: array - required: - - robot_id - - name - - config - title: RobotConfigResponse - type: object - RobotConfigStatus: - description: Contains the configuration of a robot. - properties: - device_name: - anyOf: - - type: string - - type: 'null' - title: Device Name - name: - title: Name - type: string - robot_type: - default: manipulator - enum: - - manipulator - - mobile - - other - title: Robot Type - type: string - temperature: - anyOf: - - items: - $ref: '#/components/schemas/Temperature' - type: array - - type: 'null' - title: Temperature - required: - - name - - device_name - title: RobotConfigStatus - type: object - RobotConnectionRequest: - description: Request to manually connect to a robot. - properties: - connection_details: - additionalProperties: true - description: Connection details for the robot. These are passed to the class - constructor. This can include IP address, port, and other connection parameters. - title: Connection Details - type: object - robot_name: - description: Type of the robot to connect to. - examples: - - so-100 - - wx-250s - - koch-v1.1 - title: Robot Name - type: string - required: - - robot_name - - connection_details - title: RobotConnectionRequest - type: object - RobotConnectionResponse: - additionalProperties: true - properties: - message: - anyOf: - - type: string - - type: 'null' - title: Message - robot_id: - title: Robot Id - type: integer - status: - default: ok - enum: - - ok - - error - title: Status - type: string - required: - - robot_id - title: RobotConnectionResponse - type: object - RobotPairRequest: - description: Represents a pair of robots for leader-follower control. 
- properties: - follower_id: - anyOf: - - type: integer - - type: 'null' - description: Serial number of the follower robot - title: Follower Id - leader_id: - anyOf: - - type: integer - - type: 'null' - description: Serial number of the leader robot - title: Leader Id - required: - - leader_id - - follower_id - title: RobotPairRequest - type: object - ScanDevicesResponse: - description: Response to the USB devices scan request. - properties: - devices: - description: List of connected USB devices. - items: - $ref: '#/components/schemas/LocalDevice' - title: Devices - type: array - required: - - devices - title: ScanDevicesResponse - type: object - ScanNetworkRequest: - description: Request to scan the network for devices. - properties: - robot_name: - anyOf: - - type: string - - type: 'null' - description: Name of the robot to scan for. If None, scans for all devices - on the network. - title: Robot Name - title: ScanNetworkRequest - type: object - ScanNetworkResponse: - description: Response to the network scan request. - properties: - devices: - description: List of devices found on the network. - items: - $ref: '#/components/schemas/NetworkDevice' - title: Devices - type: array - subnet: - anyOf: - - type: string - - type: 'null' - description: Subnet of the network. - examples: - - 192.168.1.1/24 - title: Subnet - required: - - devices - - subnet - title: ScanNetworkResponse - type: object - ServerInfoResponse: - properties: - model_id: - title: Model Id - type: string - port: - title: Port - type: integer - server_id: - title: Server Id - type: integer - tcp_socket: - maxItems: 2 - minItems: 2 - prefixItems: - - type: string - - type: integer - title: Tcp Socket - type: array - timeout: - title: Timeout - type: integer - url: - title: Url - type: string - required: - - server_id - - url - - port - - tcp_socket - - model_id - - timeout - title: ServerInfoResponse - type: object - ServerStatus: - description: Contains the status of the app - properties: - ai_running_status: - default: stopped - description: Whether the robot is currently controlled by an AI model. - enum: - - stopped - - running - - paused - - waiting - title: Ai Running Status - type: string - cameras: - $ref: '#/components/schemas/AllCamerasStatus' - is_recording: - default: false - description: Whether the server is currently recording an episode. - title: Is Recording - type: boolean - leader_follower_status: - default: false - description: Whether the leader-follower control is currently active. - title: Leader Follower Status - type: boolean - name: - title: Name - type: string - robot_status: - items: - $ref: '#/components/schemas/RobotConfigStatus' - title: Robot Status - type: array - robots: - deprecated: true - items: - type: string - title: Robots - type: array - server_ip: - description: IP address of the phosphobot server - examples: - - 192.168.1.X - title: Server Ip - type: string - server_port: - description: Port of the phosphobot server - examples: - - 80 - - 8020 - - 8021 - title: Server Port - type: integer - status: - enum: - - ok - - error - title: Status - type: string - version_id: - default: 0.3.120 - description: Current version of the teleoperation server - title: Version Id - type: string - required: - - status - - name - - server_ip - - server_port - title: ServerStatus - type: object - Session: - description: Session model for storing supabase session details. 
- properties: - access_token: - title: Access Token - type: string - email_confirmed: - title: Email Confirmed - type: boolean - expires_at: - title: Expires At - type: integer - refresh_token: - title: Refresh Token - type: string - user_email: - title: User Email - type: string - user_id: - title: User Id - type: string - required: - - user_id - - user_email - - email_confirmed - - access_token - - refresh_token - - expires_at - title: Session - type: object - SessionReponse: - description: Response for login/signup - properties: - is_pro_user: - anyOf: - - type: boolean - - type: 'null' - title: Is Pro User - message: - title: Message - type: string - session: - anyOf: - - $ref: '#/components/schemas/Session' - - type: 'null' - required: - - message - title: SessionReponse - type: object - SingleCameraStatus: - properties: - camera_id: - title: Camera Id - type: integer - camera_type: - description: 'Type of camera. - - `classic`: Standard camera detected by OpenCV. - - `stereo`: Stereoscopic camera. It has two lenses: left eye and right eye - to give a 3D effect. The left half of the image is the left eye, and the - right half is the right eye. - - `realsense`: Intel RealSense camera. It use infrared sensors to provide - depth information. It requires a special driver. - - `dummy`: Dummy camera. Used for testing. - - `dummy_stereo`: Dummy stereoscopic camera. Used for testing. - - `unknown`: Unknown camera type.' - enum: - - classic - - stereo - - realsense - - realsense_rgb - - realsense_depth - - dummy - - dummy_stereo - - unknown - - zmq - title: Camera Type - type: string - fps: - title: Fps - type: integer - height: - title: Height - type: integer - is_active: - title: Is Active - type: boolean - width: - title: Width - type: integer - required: - - camera_id - - is_active - - camera_type - - width - - height - - fps - title: SingleCameraStatus - type: object - SpawnStatusResponse: - additionalProperties: true - description: Response to spawn a server. - properties: - message: - anyOf: - - type: string - - type: 'null' - title: Message - server_info: - $ref: '#/components/schemas/ServerInfoResponse' - status: - default: ok - enum: - - ok - - error - title: Status - type: string - required: - - server_info - title: SpawnStatusResponse - type: object - StartAIControlRequest: - description: Request to start the AI control of the robot. - properties: - angle_format: - default: rad - description: Format of the angles used in the model. Can be 'degrees', 'radians', - or 'other'. If other is selected, you will need to specify a min and max - angle value. - enum: - - degrees - - rad - - other - examples: - - rad - title: Angle Format - type: string - cameras_keys_mapping: - anyOf: - - additionalProperties: - type: integer - type: object - - type: 'null' - description: Mapping of the camera keys to the camera ids. If set to None, - use the default mapping based on cameras order. - examples: - - context_camera: 1 - wrist_camera: 0 - title: Cameras Keys Mapping - checkpoint: - anyOf: - - type: integer - - type: 'null' - description: Checkpoint to use for the model. If None, uses the latest checkpoint. - examples: - - 500 - title: Checkpoint - max_angle: - anyOf: - - type: number - - type: 'null' - description: If angle_format is 'other', this is the maximum angle value - used in the model. If None and angle_format is 'other', will raise an - error. 
- title: Max Angle - min_angle: - anyOf: - - type: number - - type: 'null' - description: If angle_format is 'other', this is the minimum angle value - used in the model. If None and angle_format is 'other', will raise an - error. - title: Min Angle - model_id: - description: Hugging Face model id to use - title: Model Id - type: string - model_type: - description: Type of model to use. Can be gr00t or act. - enum: - - gr00t - - ACT - - ACT_BBOX - title: Model Type - type: string - prompt: - anyOf: - - type: string - - type: 'null' - description: Prompt to be followed by the robot - title: Prompt - robot_serials_to_ignore: - anyOf: - - items: - type: string - type: array - - type: 'null' - description: List of robot serial ids to ignore. If set to None, controls - all available robots. - examples: - - - /dev/ttyUSB0 - title: Robot Serials To Ignore - selected_camera_id: - anyOf: - - type: integer - - type: 'null' - description: Name of the camera to use when ACT_BBOX model is used. This - is only required for ACT_BBOX models, and is ignored for other models. - title: Selected Camera Id - speed: - default: 1.0 - description: Speed of the AI control. 1.0 is normal speed, 0.5 is half speed, - 2.0 is double speed. The highest speed is still bottlenecked by the GPU - inference time. - maximum: 2.0 - minimum: 0.1 - title: Speed - type: number - verify_cameras: - default: true - description: Whether to verify the setup before starting the AI control. - If False, skips the verification step. - title: Verify Cameras - type: boolean - required: - - model_id - - model_type - title: StartAIControlRequest - type: object - StartLeaderArmControlRequest: - description: 'Request to set up leader-follower control. The leader robot will - be controlled by the user, - - and the follower robot will mirror the leader''s movements. - - - You need two robots connected to the same computer to use this feature.' - properties: - enable_gravity_compensation: - default: false - description: Enable gravity compensation for the leader robots - title: Enable Gravity Compensation - type: boolean - gravity_compensation_values: - anyOf: - - additionalProperties: - type: integer - type: object - - type: 'null' - default: - elbow: 50 - shoulder: 100 - wrist: 10 - description: Gravity compensation pourcentage values for shoulder, elbow, - and wrist joints (0-100%) - title: Gravity Compensation Values - invert_controls: - default: false - description: Mirror controls for the follower robots - title: Invert Controls - type: boolean - robot_pairs: - description: List of robot pairs to control. Each pair contains the robot - id of the leader and the corresponding follower. - items: - $ref: '#/components/schemas/RobotPairRequest' - title: Robot Pairs - type: array - required: - - robot_pairs - title: StartLeaderArmControlRequest - type: object - StartServerRequest: - description: Request to start an inference server and get the server info. - properties: - model_id: - description: Hugging Face model id to use - title: Model Id - type: string - model_type: - description: Type of model to use. Can be gr00t or act. - enum: - - gr00t - - ACT - title: Model Type - type: string - robot_serials_to_ignore: - anyOf: - - items: - type: string - type: array - - type: 'null' - description: List of robot serial ids to ignore. If set to None, controls - all available robots. 
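# Illustrative only: a StartAIControlRequest body (defined above). The model id,
# checkpoint and camera mapping reuse example values from elsewhere in this spec;
# the prompt is a placeholder.
# {
#   "model_id": "PLB/GR00T-N1-lego-pickup-mono-2",
#   "model_type": "gr00t",
#   "checkpoint": 500,
#   "cameras_keys_mapping": {"context_camera": 1, "wrist_camera": 0},
#   "prompt": "Pick up the orange brick and put it in the black box.",
#   "angle_format": "rad"
# }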
- examples: - - - /dev/ttyUSB0 - title: Robot Serials To Ignore - required: - - model_id - - model_type - title: StartServerRequest - type: object - StartTrainingResponse: - additionalProperties: true - properties: - message: - anyOf: - - type: string - - type: 'null' - title: Message - model_url: - anyOf: - - type: string - - type: 'null' - description: URL to the Hugging Face model card. - title: Model Url - status: - default: ok - enum: - - ok - - error - title: Status - type: string - training_id: - anyOf: - - type: integer - - type: 'null' - description: ID of the training to start. This is the ID returned by the - training request. - title: Training Id - required: - - training_id - title: StartTrainingResponse - type: object - StatusResponse: - additionalProperties: true - description: Default response. May contain other fields. - properties: - message: - anyOf: - - type: string - - type: 'null' - title: Message - status: - default: ok - enum: - - ok - - error - title: Status - type: string - title: StatusResponse - type: object - SupabaseTrainingModel: - additionalProperties: true - properties: - dataset_name: - title: Dataset Name - type: string - id: - title: Id - type: integer - modal_function_call_id: - anyOf: - - type: string - - type: 'null' - title: Modal Function Call Id - model_name: - title: Model Name - type: string - model_type: - title: Model Type - type: string - requested_at: - title: Requested At - type: string - session_count: - default: 0 - title: Session Count - type: integer - status: - enum: - - succeeded - - failed - - running - - canceled - title: Status - type: string - success_rate: - anyOf: - - type: number - - type: 'null' - title: Success Rate - terminated_at: - anyOf: - - type: string - - type: 'null' - title: Terminated At - training_params: - anyOf: - - additionalProperties: true - type: object - - type: 'null' - title: Training Params - used_wandb: - anyOf: - - type: boolean - - type: 'null' - title: Used Wandb - user_id: - title: User Id - type: string - required: - - id - - status - - user_id - - dataset_name - - model_name - - requested_at - - terminated_at - - used_wandb - - model_type - title: SupabaseTrainingModel - type: object - TeleopSettings: - description: Model representing current teleop settings. - properties: - vr_scaling: - description: VR scaling factor for teleoperation control. - examples: - - 1.0 - - 0.5 - - 2.0 - exclusiveMinimum: 0.0 - title: Vr Scaling - type: number - required: - - vr_scaling - title: TeleopSettings - type: object - TeleopSettingsRequest: - description: Request model for updating teleop settings. - properties: - vr_scaling: - description: VR scaling factor for teleoperation control. - examples: - - 1.0 - - 0.5 - - 2.0 - exclusiveMinimum: 0.0 - title: Vr Scaling - type: number - required: - - vr_scaling - title: TeleopSettingsRequest - type: object - Temperature: - properties: - current: - anyOf: - - type: number - - type: 'null' - title: Current - max: - anyOf: - - type: number - - type: 'null' - title: Max - required: - - current - - max - title: Temperature - type: object - TemperatureReadResponse: - description: Response to read the Temperature of the robot. - properties: - current_max_Temperature: - anyOf: - - items: - $ref: '#/components/schemas/Temperature' - type: array - - type: 'null' - description: ' A list of Temperature objects, one for each joint. If the - robot is not connected, this will be None.' 
- title: Current Max Temperature - required: - - current_max_Temperature - title: TemperatureReadResponse - type: object - TemperatureWriteRequest: - description: Request to set the maximum Temperature for joints of the robot. - properties: - maximum_temperature: - description: A list with the maximum temperature of each joint. The length - of the list must be equal to the number of joints. - items: - type: integer - title: Maximum Temperature - type: array - required: - - maximum_temperature - title: TemperatureWriteRequest - type: object - TorqueControlRequest: - description: Request to control the robot's torque. - properties: - torque_status: - description: Whether to enable or disable torque control. - title: Torque Status - type: boolean - required: - - torque_status - title: TorqueControlRequest - type: object - TorqueReadResponse: - description: Response to read the torque of the robot. - properties: - current_torque: - description: A list of length 6, with the current torque of each joint. - items: - type: number - title: Current Torque - type: array - required: - - current_torque - title: TorqueReadResponse - type: object - TrainingInfoRequest: - properties: - model_id: - anyOf: - - type: string - - type: 'null' - description: Hugging Face model id to get training info - title: Model Id - model_type: - enum: - - gr00t - - ACT - - ACT_BBOX - - custom - title: Model Type - type: string - required: - - model_type - title: TrainingInfoRequest - type: object - TrainingInfoResponse: - properties: - message: - anyOf: - - type: string - - type: 'null' - title: Message - status: - enum: - - ok - - error - title: Status - type: string - training_body: - anyOf: - - additionalProperties: true - type: object - - type: 'null' - title: Training Body - required: - - status - title: TrainingInfoResponse - type: object - TrainingParamsAct: - additionalProperties: true - description: Training parameters are left to None by default and are set depending - on the dataset in the training pipeline. - properties: - batch_size: - anyOf: - - exclusiveMinimum: 0.0 - maximum: 150.0 - type: integer - - type: 'null' - description: Batch size for training, we run this on an A10G. Leave it to - None to auto-detect based on your dataset - title: Batch Size - save_freq: - default: 5000 - description: Number of steps between saving the model. - exclusiveMinimum: 0.0 - maximum: 1000000.0 - title: Save Freq - type: integer - steps: - anyOf: - - exclusiveMinimum: 0.0 - maximum: 1000000.0 - type: integer - - type: 'null' - description: Number of training steps. Leave it to None to auto-detect based - on your dataset - title: Steps - title: TrainingParamsAct - type: object - TrainingParamsActWithBbox: - additionalProperties: true - description: Training parameters for ACT with bounding box - properties: - batch_size: - anyOf: - - exclusiveMinimum: 0.0 - maximum: 150.0 - type: integer - - type: 'null' - description: Batch size for training, we run this on an A10G. Leave it to - None to auto-detect based on your dataset - title: Batch Size - image_key: - default: main - description: Key for the image to run detection on, e.g. 'main' or 'images.main' - examples: - - main - - images.main - minLength: 1 - title: Image Key - type: string - image_keys_to_keep: - description: Optional list of image keys to keep. If none, all image keys - will be dropped. - items: - type: string - title: Image Keys To Keep - type: array - save_freq: - default: 5000 - description: Number of steps between saving the model. 
- exclusiveMinimum: 0.0 - maximum: 1000000.0 - title: Save Freq - type: integer - steps: - anyOf: - - exclusiveMinimum: 0.0 - maximum: 1000000.0 - type: integer - - type: 'null' - description: Number of training steps. Leave it to None to auto-detect based - on your dataset - title: Steps - target_detection_instruction: - default: 'e.g.: green lego brick, red ball, blue plushy...' - description: Instruction for the target object to detect, e.g. 'red/orange - lego brick' - examples: - - red/orange lego brick - - brown plushy - - blue ball - minLength: 4 - title: Target Detection Instruction - type: string - title: TrainingParamsActWithBbox - type: object - TrainingParamsGr00T: - additionalProperties: true - properties: - batch_size: - anyOf: - - exclusiveMinimum: 0.0 - maximum: 128.0 - type: integer - - type: 'null' - default: 64 - description: Batch size for training. Decrease it if you get an Out Of Memory - (OOM) error - title: Batch Size - data_dir: - default: data/ - description: The directory to save the dataset to - title: Data Dir - type: string - learning_rate: - default: 0.0001 - description: Learning rate for training. - exclusiveMinimum: 0.0 - maximum: 1.0 - title: Learning Rate - type: number - num_epochs: - default: 10 - description: Number of epochs to train for. - exclusiveMinimum: 0.0 - maximum: 100.0 - title: Num Epochs - type: integer - output_dir: - default: outputs/ - description: The directory to save the model to - title: Output Dir - type: string - save_steps: - default: 1000 - description: Number of steps between saving the model. - exclusiveMinimum: 0.0 - maximum: 100000.0 - title: Save Steps - type: integer - validation_data_dir: - anyOf: - - type: string - - type: 'null' - description: Optional directory to save the validation dataset to. If None, - validation is not run. - title: Validation Data Dir - validation_dataset_name: - anyOf: - - type: string - - type: 'null' - description: Optional dataset repository ID on Hugging Face to use for validation - title: Validation Dataset Name - title: TrainingParamsGr00T - type: object - TrainingParamsPi0: - additionalProperties: false - description: Training parameters for Pi0 model - properties: - batch_size: - anyOf: - - exclusiveMinimum: 0.0 - maximum: 128.0 - type: integer - - type: 'null' - description: Batch size for training, leave it to None to auto-detect based - on your dataset - title: Batch Size - data_dir: - default: data/ - description: The directory to save the dataset to - title: Data Dir - type: string - epochs: - default: 10 - description: Number of epochs to train for, default is 10 - exclusiveMinimum: 0.0 - maximum: 50.0 - title: Epochs - type: integer - learning_rate: - default: 0.0001 - description: Learning rate for training, default is 0.0001 - exclusiveMinimum: 0.0 - maximum: 1.0 - title: Learning Rate - type: number - output_dir: - default: outputs/ - description: The directory to save the model to - title: Output Dir - type: string - path_to_pi0_repo: - default: . - description: The path to the openpi repo. If not provided, will assume we - are in the repo. 
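# Illustrative only: TrainingParamsGr00T (defined above) written out with its default
# values; in practice any subset of fields can be supplied.
# {
#   "batch_size": 64,
#   "learning_rate": 0.0001,
#   "num_epochs": 10,
#   "save_steps": 1000,
#   "data_dir": "data/",
#   "output_dir": "outputs/"
# }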
- title: Path To Pi0 Repo - type: string - train_test_split: - default: 1.0 - description: Train test split ratio, default is 1.0 (no split), should be - between 0 and 1 - exclusiveMinimum: 0.0 - maximum: 1.0 - title: Train Test Split - type: number - validation_dataset_name: - anyOf: - - type: string - - type: 'null' - description: Optional dataset repository ID on Hugging Face to use for validation - title: Validation Dataset Name - title: TrainingParamsPi0 - type: object - TrainingRequest: - description: 'Pydantic model for training request validation. - - This version consolidates all model name and parameter logic into a single - - validator to prevent redundant operations and fix the duplicate suffix bug.' - properties: - dataset_name: - description: Dataset repository ID on Hugging Face, should be a public dataset - title: Dataset Name - type: string - model_name: - anyOf: - - type: string - - type: 'null' - description: Name of the trained model to upload to Hugging Face, should - be in the format phospho-app/ or - title: Model Name - model_type: - description: Type of model to train, supports 'ACT', 'gr00t', and 'pi0' - enum: - - ACT - - ACT_BBOX - - gr00t - - pi0 - - custom - title: Model Type - type: string - private_mode: - default: false - description: Whether to use private training (PRO users only) - title: Private Mode - type: boolean - training_params: - anyOf: - - $ref: '#/components/schemas/TrainingParamsAct' - - $ref: '#/components/schemas/TrainingParamsActWithBbox' - - $ref: '#/components/schemas/TrainingParamsGr00T' - - $ref: '#/components/schemas/TrainingParamsPi0' - - type: 'null' - description: Training parameters for the model. - title: Training Params - user_hf_token: - anyOf: - - type: string - - type: 'null' - description: User's personal HF token for private training - title: User Hf Token - wandb_api_key: - anyOf: - - type: string - - type: 'null' - description: WandB API key for tracking training, you can find it at https://wandb.ai/authorize - title: Wandb Api Key - required: - - model_type - - dataset_name - title: TrainingRequest - type: object - TrainingsList: - properties: - models: - items: - $ref: '#/components/schemas/SupabaseTrainingModel' - title: Models - type: array - required: - - models - title: TrainingsList - type: object - UDPServerInformationResponse: - properties: - host: - title: Host - type: string - port: - title: Port - type: integer - required: - - host - - port - title: UDPServerInformationResponse - type: object - ValidationError: - properties: - loc: - items: - anyOf: - - type: string - - type: integer - title: Location - type: array - msg: - title: Message - type: string - type: - title: Error Type - type: string - required: - - loc - - msg - - type - title: ValidationError - type: object - VerifyEmailCodeRequest: - properties: - email: - title: Email - type: string - token: - title: Token - type: string - required: - - email - - token - title: VerifyEmailCodeRequest - type: object - VizSettingsResponse: - description: Settings for the vizualisation page. - properties: - height: - title: Height - type: integer - quality: - title: Quality - type: integer - width: - title: Width - type: integer - required: - - width - - height - - quality - title: VizSettingsResponse - type: object - VoltageReadResponse: - description: Response to read the torque of the robot. - properties: - current_voltage: - anyOf: - - items: - type: number - type: array - - type: 'null' - description: A list of length 6, with the current voltage of each joint. 
- If the robot is not connected, this will be None. - title: Current Voltage - required: - - current_voltage - title: VoltageReadResponse - type: object - WandBTokenRequest: - description: WandB token saved by the user. - properties: - token: - title: Token - type: string - required: - - token - title: WandBTokenRequest - type: object -info: - title: FastAPI - version: 0.1.0 -openapi: 3.1.0 -paths: - /: - get: - operationId: serve_dashboard__get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - /admin: - get: - operationId: serve_dashboard_admin_get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - /admin/form/usersettings: - post: - operationId: submit_user_settings_admin_form_usersettings_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/AdminSettingsRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Submit User Settings - tags: - - pages - /admin/huggingface: - post: - operationId: submit_token_admin_huggingface_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/HuggingFaceTokenRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Submit Token - tags: - - pages - /admin/huggingface/whoami: - post: - operationId: whoami_admin_huggingface_whoami_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/HFWhoamIResponse' - description: Successful Response - summary: Whoami - tags: - - pages - /admin/settings: - get: - operationId: get_admin_settings_admin_settings_get - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/AdminSettingsResponse' - description: Successful Response - summary: Get Admin Settings - tags: - - pages - /admin/settings/tokens: - post: - operationId: get_admin_settings_token_admin_settings_tokens_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/AdminSettingsTokenResponse' - description: Successful Response - summary: Get Admin Settings Token - tags: - - pages - /admin/wandb: - post: - operationId: submit_wandb_token_admin_wandb_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/WandBTokenRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Submit Wandb Token - tags: - - pages - /ai-control/chat: - post: - description: Endpoint to handle AI control chat requests. 
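# Illustrative only: sending a natural-language command to the chat endpoint of a
# locally running server. Host and port are assumptions (the ServerStatus schema above
# lists 80, 8020 and 8021 as example ports); the body is a ChatRequest and the prompt
# is a placeholder.
#   curl -X POST http://localhost:80/ai-control/chat -H "Content-Type: application/json" -d '{"prompt": "Pick up the orange brick and put it in the black box."}'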
- operationId: ai_control_chat_ai_control_chat_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/ChatRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/ChatResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Ai Control Chat - tags: - - chat - /ai-control/chat/log: - post: - description: Log the first chat request to the database. - operationId: log_chat_ai_control_chat_log_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/ChatRequest' - required: true - responses: - '200': - content: - application/json: - schema: {} - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Log Chat - tags: - - chat - /ai-control/feedback: - post: - operationId: feedback_ai_control_ai_control_feedback_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/FeedbackRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Feedback about the AI control session - tags: - - control - /ai-control/pause: - post: - description: Pause the auto control by AI. - operationId: pause_ai_control_ai_control_pause_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - summary: Pause the auto control by AI - tags: - - control - /ai-control/resume: - post: - description: Resume the auto control by AI. - operationId: resume_ai_control_ai_control_resume_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - summary: Resume the auto control by AI - tags: - - control - /ai-control/spawn: - post: - description: Start an inference server and return the server info. - operationId: spawn_inference_server_ai_control_spawn_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/StartServerRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/SpawnStatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Start an inference server - tags: - - control - /ai-control/start: - post: - description: Start the auto control by AI. - operationId: start_ai_control_ai_control_start_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/StartAIControlRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/AIControlStatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Start the auto control by AI - tags: - - control - /ai-control/status: - post: - description: Get the status of the auto control by AI. 
- operationId: fetch_auto_control_status_ai_control_status_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/AIStatusResponse' - description: Successful Response - summary: Get the status of the auto control by AI - tags: - - control - /ai-control/stop: - post: - description: Stop the auto control by AI. - operationId: stop_ai_control_ai_control_stop_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - summary: Stop the auto control by AI - tags: - - control - /auth: - get: - operationId: serve_dashboard_auth_get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - /auth/check-auth: - get: - description: 'Check if the user is authenticated by validating the session with - Supabase. - - Returns a JSON response indicating authentication status.' - operationId: is_authenticated_auth_check_auth_get - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/AuthResponse' - description: Successful Response - summary: Is Authenticated - tags: - - auth - /auth/confirm: - get: - operationId: serve_dashboard_auth_confirm_get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - post: - operationId: confirm_email_auth_confirm_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/ConfirmRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/SessionReponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Confirm Email - tags: - - auth - /auth/forgot-password: - get: - operationId: serve_dashboard_auth_forgot_password_get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - post: - description: Send a password reset email to the provided email address. - operationId: forgot_password_auth_forgot_password_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/ForgotPasswordRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Forgot Password - tags: - - auth - /auth/logout: - post: - operationId: logout_auth_logout_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - summary: Logout - tags: - - auth - /auth/reset-password: - get: - operationId: serve_dashboard_auth_reset_password_get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - post: - description: Reset a user's password using the recovery tokens from the Supabase - reset email. 
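# Illustrative only: a typical AI-control lifecycle against a locally running server
# (host and port are assumptions). The start body is a minimal StartAIControlRequest
# as defined above, reusing an example model id from this spec.
#   curl -X POST http://localhost:80/ai-control/start -H "Content-Type: application/json" -d '{"model_id": "PLB/GR00T-N1-lego-pickup-mono-2", "model_type": "gr00t"}'
#   curl -X POST http://localhost:80/ai-control/status
#   curl -X POST http://localhost:80/ai-control/stop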
- operationId: reset_password_auth_reset_password_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/ResetPasswordRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Reset Password - tags: - - auth - /auth/signin: - post: - description: Sign in an existing user. - operationId: signin_auth_signin_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/LoginCredentialsRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/SessionReponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Signin - tags: - - auth - /auth/signup: - post: - description: Sign up a new user. - operationId: signup_auth_signup_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/LoginCredentialsRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/SessionReponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Signup - tags: - - auth - /auth/verify-email-token: - post: - description: Verify the email confirmation code sent to the user. - operationId: verify_email_token_auth_verify_email_token_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/VerifyEmailCodeRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/SessionReponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Verify Email Token - tags: - - auth - /browse: - get: - operationId: serve_dashboard_browse_get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - /calibrate: - post: - description: Start the calibration sequence for the robot. - operationId: calibrate_calibrate_post - parameters: - - in: query - name: robot_id - required: false - schema: - default: 0 - title: Robot Id - type: integer - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/CalibrateResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Calibrate Robot - tags: - - control - /calibration: - get: - operationId: serve_dashboard_calibration_get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - /cameras/add-zmq: - post: - description: Add a camera feed from a ZMQ publisher. 
- operationId: add_zmq_camera_feed_cameras_add_zmq_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/AddZMQCameraRequest' - required: true - responses: - '200': - content: - application/json: - schema: - additionalProperties: true - title: Response Add Zmq Camera Feed Cameras Add Zmq Post - type: object - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Add Zmq Camera Feed - tags: - - camera - /cameras/refresh: - post: - description: Refresh the list of available cameras. This operation can take - a few seconds as it disconnects and reconnects to all cameras. It is useful - when cameras are added or removed while the application is running. - operationId: refresh_camera_list_cameras_refresh_post - responses: - '200': - content: - application/json: - schema: - additionalProperties: true - title: Response Refresh Camera List Cameras Refresh Post - type: object - description: Successful Response - summary: Refresh Camera List - tags: - - camera - /control: - get: - operationId: serve_dashboard_control_get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - /dataset/delete: - post: - operationId: delete_dataset_dataset_delete_post - parameters: - - in: query - name: path - required: true - schema: - title: Path - type: string - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Delete Dataset - tags: - - pages - /dataset/download: - get: - description: Download a folder as a ZIP file. - operationId: download_folder_dataset_download_get - parameters: - - in: query - name: folder_path - required: true - schema: - title: Folder Path - type: string - responses: - '200': - content: - application/json: - schema: {} - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Download Folder - tags: - - pages - /dataset/hf_download: - post: - operationId: hf_download_dataset_dataset_hf_download_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/HFDownloadDatasetRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Hf Download Dataset - tags: - - pages - /dataset/info: - post: - description: Get the dataset keys and frames. - operationId: get_dataset_info_dataset_info_post - parameters: - - in: query - name: path - required: true - schema: - title: Path - type: string - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/InfoResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Get Dataset Info - tags: - - pages - /dataset/list: - post: - description: List all datasets that are both in Hugging Face and locally. 
- operationId: list_datasets_dataset_list_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/DatasetListResponse' - description: Successful Response - summary: List Datasets - tags: - - pages - /dataset/merge: - post: - description: Merge two datasets into one. - operationId: merge_datasets_dataset_merge_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/MergeDatasetsRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Merge Datasets - tags: - - pages - /dataset/repair: - post: - description: 'Repair a dataset by removing any corrupted files. - - For now, this only works for parquets files. - - If the parquets are wrongly indexed, it will not do anything.' - operationId: repair_dataset_dataset_repair_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/DatasetRepairRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Repair Dataset - tags: - - pages - /dataset/shuffle: - post: - description: Shuffle a dataset in place. - operationId: shuffle_dataset_dataset_shuffle_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/DatasetShuffleRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Shuffle Dataset - tags: - - pages - /dataset/split: - post: - description: 'Split a dataset into two datasets. - - Used for creating training and validation datasets.' - operationId: split_dataset_dataset_split_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/DatasetSplitRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Split Dataset - tags: - - pages - /dataset/sync: - post: - operationId: sync_dataset_dataset_sync_post - parameters: - - in: query - name: path - required: true - schema: - title: Path - type: string - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Sync Dataset - tags: - - pages - /end-effector/read: - post: - description: Retrieve the position, orientation, and open status of the robot's - end effector. Only available for manipulators. 
- operationId: end_effector_read_end_effector_read_post - parameters: - - in: query - name: robot_id - required: false - schema: - default: 0 - title: Robot Id - type: integer - requestBody: - content: - application/json: - schema: - anyOf: - - $ref: '#/components/schemas/EndEffectorReadRequest' - - type: 'null' - title: Query - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/EndEffectorPosition' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Read End-Effector Position - tags: - - control - /episode/delete: - post: - description: 'Delete an episode from the dataset. - - Parameters: - - - episode_id: int: The episode ID to delete. - - - path: str: The path to the dataset folder.' - operationId: delete_episode_episode_delete_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/DeleteEpisodeRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Delete Episode - tags: - - pages - /files: - post: - operationId: files_files_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/BrowserFilesRequest' - required: true - responses: - '200': - content: - application/json: - schema: {} - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Files - tags: - - pages - /frames: - get: - description: Capture frames from all available cameras. Returns a dictionary - with camera IDs as keys and base64 encoded JPG images as values. If a camera - is not available or fails to capture, its value will be None. - operationId: get_all_camera_frames_frames_get - parameters: - - in: query - name: resize_x - required: false - schema: - anyOf: - - type: integer - - type: 'null' - title: Resize X - - in: query - name: resize_y - required: false - schema: - anyOf: - - type: integer - - type: 'null' - title: Resize Y - responses: - '200': - content: - application/json: - example: - '0': base64_encoded_image_string - realsense: base64_encoded_image_string - schema: - additionalProperties: - anyOf: - - type: string - - type: 'null' - title: Response Get All Camera Frames Frames Get - type: object - description: Successfully captured frames from available cameras - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - '500': - description: Server error while capturing frames - summary: Get All Camera Frames - tags: - - camera - /gravity/start: - post: - description: Enable gravity compensation for the robot. 
- operationId: start_gravity_gravity_start_post - parameters: - - in: query - name: robot_id - required: false - schema: - default: 0 - title: Robot Id - type: integer - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Start Gravity - tags: - - control - /gravity/stop: - post: - description: Stop the gravity compensation. - operationId: stop_gravity_compensation_gravity_stop_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - summary: Stop the gravity compensation - tags: - - control - /inference: - get: - operationId: serve_dashboard_inference_get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - /joints/read: - post: - description: Read the current positions of the robot's joints in radians and - motor units. - operationId: read_joints_joints_read_post - parameters: - - in: query - name: robot_id - required: false - schema: - default: 0 - title: Robot Id - type: integer - requestBody: - content: - application/json: - schema: - anyOf: - - $ref: '#/components/schemas/JointsReadRequest' - - type: 'null' - title: Request - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/JointsReadResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Read Joint Positions - tags: - - control - /joints/write: - post: - description: Move the robot's joints to the specified angles. - operationId: write_joints_joints_write_post - parameters: - - in: query - name: robot_id - required: false - schema: - default: 0 - title: Robot Id - type: integer - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/JointsWriteRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Write Joint Positions - tags: - - control - /local/scan-devices: - post: - description: Endpoint to list all devices connected to the system. - operationId: list_connected_devices_local_scan_devices_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/ScanDevicesResponse' - description: Successful Response - summary: List Connected Devices - tags: - - networking - /model/configuration: - post: - description: Fetch the model info from Hugging Face and return its configuration. 
- operationId: get_model_configuration_model_configuration_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/ModelConfigurationRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/ModelConfigurationResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Get Model Configuration - tags: - - pages - /move/absolute: - post: - description: Move the robot to an absolute position specified by the end-effector - (in centimeters and degrees). Make sure to call `/move/init` before using - this endpoint. - operationId: move_to_absolute_position_move_absolute_post - parameters: - - in: query - name: robot_id - required: false - schema: - default: 0 - title: Robot Id - type: integer - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/MoveAbsoluteRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Move to Absolute Position - tags: - - control - /move/hello: - post: - description: Make the robot say hello by opening and closing its gripper. (Test - endpoint) - operationId: say_hello_move_hello_post - parameters: - - in: query - name: robot_id - required: false - schema: - default: 0 - title: Robot Id - type: integer - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Make the robot say hello (test endpoint) - tags: - - control - /move/init: - post: - description: Initialize the robot to its initial position before starting the - teleoperation. - operationId: move_init_move_init_post - parameters: - - in: query - name: robot_id - required: false - schema: - anyOf: - - type: integer - - type: 'null' - title: Robot Id - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Initialize Robot - tags: - - control - /move/leader/start: - post: - description: Use the leader arm to control the follower arm. - operationId: start_leader_follower_move_leader_start_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/StartLeaderArmControlRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Use the leader arm to control the follower arm - tags: - - control - /move/leader/stop: - post: - description: Stop the leader-follower control. 
- operationId: stop_leader_follower_move_leader_stop_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - summary: Stop the leader-follower control - tags: - - control - /move/relative: - post: - description: Move the robot to a relative position based on received delta values - (in centimeters and degrees). - operationId: move_relative_move_relative_post - parameters: - - in: query - name: robot_id - required: false - schema: - default: 0 - title: Robot Id - type: integer - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/RelativeEndEffectorPosition' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Move to Relative Position - tags: - - control - /move/sleep: - post: - description: Put the robot to its sleep position by giving direct instructions - to joints. This function disables the torque. - operationId: move_sleep_move_sleep_post - parameters: - - in: query - name: robot_id - required: false - schema: - default: 0 - title: Robot Id - type: integer - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Put the robot to its sleep position - tags: - - control - /move/teleop: - post: - operationId: move_teleop_post_move_teleop_post - parameters: - - in: query - name: robot_id - required: false - schema: - anyOf: - - type: integer - - type: 'null' - title: Robot Id - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/AppControlData' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Teleoperation Control - tags: - - control - /move/teleop/udp: - post: - description: Start a UDP server to send and receive teleoperation data to the - robot. - operationId: move_teleop_udp_move_teleop_udp_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/UDPServerInformationResponse' - description: Successful Response - summary: Move Teleop Udp - tags: - - control - /move/teleop/udp/stop: - post: - description: Stop the UDP server main loop. - operationId: stop_teleop_udp_move_teleop_udp_stop_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - summary: Stop Teleop Udp - tags: - - control - /network: - get: - operationId: serve_dashboard_network_get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - /network/connect: - post: - description: 'Endpoint to connect phosphobot to a new network. - - Returns immediately and performs connection in the background. - - Will fallback to the hotspot if it fails to connect.' 
- operationId: switch_to_network_network_connect_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/NetworkCredentials' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Switch To Network - tags: - - networking - /network/hotspot: - post: - description: 'Endpoint to activate the hotspot on the Raspberry Pi. - - Returns immediately and performs setup in the background.' - operationId: activate_hotspot_network_hotspot_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - summary: Activate Hotspot - tags: - - networking - /network/scan-devices: - post: - description: 'Endpoint to list all IP addresses on the local network. - - Returns a list of IP addresses.' - operationId: list_local_network_ips_network_scan_devices_post - requestBody: - content: - application/json: - schema: - anyOf: - - $ref: '#/components/schemas/ScanNetworkRequest' - - type: 'null' - title: Query - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/ScanNetworkResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: List Local Network Ips - tags: - - networking - /recording/play: - post: - description: Play a recorded episode. - operationId: play_recording_recording_play_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/RecordingPlayRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Play Recording - tags: - - recording - /recording/start: - post: - description: 'Asynchronously start recording an episode in the background. - - Output format is chosen when stopping the recording.' - operationId: start_recording_episode_recording_start_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/RecordingStartRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Start Recording Episode - tags: - - recording - /recording/stop: - post: - description: Stop the recording of the episode. The data is saved to disk to - the user home directory, in the `phosphobot` folder. 
- operationId: stop_recording_episode_recording_stop_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/RecordingStopRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/RecordingStopResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Stop Recording Episode - tags: - - recording - /robot/add-connection: - post: - description: 'Manually add a robot connection to the robot manager. - - Useful for adding robot that are accessible only via WiFi, for example.' - operationId: add_robot_connection_robot_add_connection_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/RobotConnectionRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/RobotConnectionResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Add Robot Connection - tags: - - control - /robot/config: - post: - description: Get the configuration of the robot. - operationId: get_robot_config_robot_config_post - parameters: - - in: query - name: robot_id - required: false - schema: - default: 0 - title: Robot Id - type: integer - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/RobotConfigResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Get Robot Config - tags: - - control - /robot/disconnect: - post: - description: 'Manually add a robot connection to the robot manager. - - Useful for adding robot that are accessible only via WiFi, for example.' - operationId: disconnect_robot_robot_disconnect_post - parameters: - - in: query - name: robot_id - required: false - schema: - default: 0 - title: Robot Id - type: integer - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Disconnect Robot - tags: - - control - /sign-in: - get: - operationId: serve_dashboard_sign_in_get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - /sign-up: - get: - operationId: serve_dashboard_sign_up_get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - /sign-up/confirm: - get: - operationId: serve_dashboard_sign_up_confirm_get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - /status: - get: - description: Get the status of the server. - operationId: status_status_get - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/ServerStatus' - description: Successful Response - summary: Status - /teleop/settings: - post: - description: Update teleoperation settings such as VR scaling factor. 
- operationId: update_teleop_settings_teleop_settings_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/TeleopSettingsRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Update Teleop Settings - tags: - - control - /teleop/settings/read: - post: - description: Get current teleoperation settings. - operationId: read_teleop_settings_teleop_settings_read_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/TeleopSettings' - description: Successful Response - summary: Read Teleop Settings - tags: - - control - /temperature/read: - post: - description: Read the current Temperature and maximum Temperature of the robot's - motors. - operationId: read_temperature_temperature_read_post - parameters: - - in: query - name: robot_id - required: false - schema: - default: 0 - title: Robot Id - type: integer - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/TemperatureReadResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Read Temperature - tags: - - control - /temperature/write: - post: - description: Set the robot's maximum temperature for motors.. - operationId: write_temperature_temperature_write_post - parameters: - - in: query - name: robot_id - required: false - schema: - default: 0 - title: Robot Id - type: integer - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/TemperatureWriteRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Write the Maximum Temperature for Joints - tags: - - control - /torque/read: - post: - description: Read the current torque of the robot's joints. - operationId: read_torque_torque_read_post - parameters: - - in: query - name: robot_id - required: false - schema: - default: 0 - title: Robot Id - type: integer - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/TorqueReadResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Read Torque - tags: - - control - /torque/toggle: - post: - description: Enable or disable the torque of the robot. 
- operationId: toggle_torque_torque_toggle_post - parameters: - - in: query - name: robot_id - required: false - schema: - anyOf: - - type: integer - - type: 'null' - title: Robot Id - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/TorqueControlRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Toggle Torque - tags: - - control - /train: - get: - operationId: serve_dashboard_train_get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - /training/cancel: - post: - description: Cancel a training job - operationId: cancel_training_training_cancel_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/CancelTrainingRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Cancel Training - tags: - - training - /training/info: - post: - description: '- Fetch the info.json from the model repo and return the training - info. - - - If the model type is "custom", return a custom command to run the training.' - operationId: get_training_info_training_info_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/TrainingInfoRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/TrainingInfoResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Get Training Info - tags: - - pages - /training/logs/{log_file}: - get: - description: Stream the logs from a log file - operationId: stream_logs_training_logs__log_file__get - parameters: - - in: path - name: log_file - required: true - schema: - title: Log File - type: string - responses: - '200': - content: - application/json: - schema: {} - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Stream Logs - tags: - - training - /training/models/read: - post: - description: Get the list of models with aggregated AI control session metrics - operationId: get_models_training_models_read_post - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/TrainingsList' - description: Successful Response - summary: Get Models - tags: - - training - /training/start: - post: - description: Start training an ACT or gr00t model on the specified dataset. - This will upload a trained model to the Hugging Face Hub using the main branch - of the specified dataset. 
- operationId: start_training_training_start_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/TrainingRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StartTrainingResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Start training a model - tags: - - training - /training/start-custom: - post: - operationId: start_custom_training_training_start_custom_post - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/CustomTrainingRequest' - required: true - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/StatusResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Start Custom Training - tags: - - training - /update/upgrade-to-latest-version: - get: - description: 'Upgrade the teleop software to the latest available version. - - Checks the latest available version and upgrades the software if necessary. - - Works only on raspberry pi devices.' - operationId: upgrade_to_latest_version_update_upgrade_to_latest_version_get - responses: - '200': - content: - application/json: - schema: - additionalProperties: true - title: Response Upgrade To Latest Version Update Upgrade To Latest - Version Get - type: object - description: Successful Response - summary: Upgrade To Latest Version - tags: - - update - /update/version: - post: - description: 'Get the latest available version of the teleop software. - - Works only on raspberry pi devices.' - operationId: get_latest_available_version_update_version_post - parameters: - - in: query - name: run_quick - required: false - schema: - default: false - title: Run Quick - type: boolean - responses: - '200': - content: - application/json: - schema: - additionalProperties: true - title: Response Get Latest Available Version Update Version Post - type: object - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Get Latest Available Version - tags: - - update - /video/{camera_id}: - get: - description: Stream video feed of the specified camera. If no camera id is provided, - the default camera is used. Specify a target size and quality using query - parameters. - operationId: video_feed_for_camera_video__camera_id__get - parameters: - - in: path - name: camera_id - required: true - schema: - anyOf: - - type: integer - - type: 'null' - title: Camera Id - - in: query - name: height - required: false - schema: - anyOf: - - type: integer - - type: 'null' - title: Height - - in: query - name: width - required: false - schema: - anyOf: - - type: integer - - type: 'null' - title: Width - - in: query - name: quality - required: false - schema: - anyOf: - - type: integer - - type: 'null' - title: Quality - responses: - '200': - content: - application/json: - schema: {} - description: Streaming video feed of the specified camera. 
- '404': - description: Camera not available - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Video Feed For Camera - tags: - - camera - /viz: - get: - operationId: serve_dashboard_viz_get - responses: - '200': - content: - text/html: - schema: - type: string - description: Successful Response - summary: Serve Dashboard - tags: - - pages - /viz/settings: - get: - description: Page with an overview of the connected cameras. Open this page - in the chrome browser. - operationId: get_viz_settings_viz_settings_get - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/VizSettingsResponse' - description: Successful Response - summary: Get Viz Settings - tags: - - pages - /voltage/read: - post: - description: Read the current voltage of the robot's motors. - operationId: read_voltage_voltage_read_post - parameters: - - in: query - name: robot_id - required: false - schema: - default: 0 - title: Robot Id - type: integer - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/VoltageReadResponse' - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - summary: Read Voltage - tags: - - control diff --git a/mintlify/package.json b/mintlify/package.json deleted file mode 100644 index 7df0c4c..0000000 --- a/mintlify/package.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "dependencies": { - "@img/sharp-darwin-arm64": "^0.33.1", - "mintlify": "^4.0.331", - "sharp": "^0.33.3" - } -} diff --git a/mintlify/recording/play-recording.mdx b/mintlify/recording/play-recording.mdx deleted file mode 100644 index b9b9225..0000000 --- a/mintlify/recording/play-recording.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /recording/play ---- \ No newline at end of file diff --git a/mintlify/recording/start-recording-episode.mdx b/mintlify/recording/start-recording-episode.mdx deleted file mode 100644 index 1b2cb92..0000000 --- a/mintlify/recording/start-recording-episode.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /recording/start ---- \ No newline at end of file diff --git a/mintlify/recording/stop-recording-episode.mdx b/mintlify/recording/stop-recording-episode.mdx deleted file mode 100644 index 467a08a..0000000 --- a/mintlify/recording/stop-recording-episode.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /recording/stop ---- \ No newline at end of file diff --git a/mintlify/snippets/get-mq-app.mdx b/mintlify/snippets/get-mq-app.mdx deleted file mode 100644 index 346929f..0000000 --- a/mintlify/snippets/get-mq-app.mdx +++ /dev/null @@ -1,20 +0,0 @@ - - - Unlock access to VR Control, advanced AI training, and more. - - - - Get the phospho teleoperation app on the Meta Store for Meta Quest 2, Pro, 3, and 3s. - - - - - If you bought our [phospho starter pack](https://robots.phospho.ai/starter-pack), you should have received a link to get the phospho teleoperation Meta Quest app. Please [reach out](mailto:contact@phospho.ai) if not. 
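The OpenAPI document removed above describes the phosphobot HTTP API (control, recording, camera, networking, and training endpoints). As a rough orientation, here is a minimal sketch of how a few of those endpoints could be exercised with curl. It assumes a phosphobot server running locally on the default port (the dashboard is served at `localhost`) and a single robot with `robot_id=0`; request payload schemas such as `MoveAbsoluteRequest` are defined in the spec's components and are not reproduced here.

```bash
# Check that the phosphobot server is up and reachable
curl -s http://localhost/status

# Move robot 0 to its initial position before teleoperation
curl -s -X POST "http://localhost/move/init?robot_id=0"

# Read the current joint positions (radians and motor units) of robot 0
curl -s -X POST "http://localhost/joints/read?robot_id=0"

# Put robot 0 back into its sleep position (this disables torque)
curl -s -X POST "http://localhost/move/sleep?robot_id=0"
```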
diff --git a/mintlify/snippets/install-code.mdx b/mintlify/snippets/install-code.mdx deleted file mode 100644 index aff992c..0000000 --- a/mintlify/snippets/install-code.mdx +++ /dev/null @@ -1,32 +0,0 @@ - - -```bash macOS -curl -fsSL https://raw.githubusercontent.com/phospho-app/phosphobot/main/install.sh | bash -``` - -```bash Linux -curl -fsSL https://raw.githubusercontent.com/phospho-app/phosphobot/main/install.sh | sudo bash -``` - -```powershell Windows -powershell -ExecutionPolicy ByPass -Command "irm https://raw.githubusercontent.com/phospho-app/phosphobot/main/install.ps1 | iex" -``` - -```bash uv (Linux and macOS) -# Install uv: https://docs.astral.sh/uv/ -curl -LsSf https://astral.sh/uv/install.sh | sh - -# Run phosphobot -uvx phosphobot@latest run -``` - -```powershell uv (Windows) -# Install uv: https://docs.astral.sh/uv/ -powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex" - -# Run phosphobot -uvx phosphobot@latest run -``` - - - \ No newline at end of file diff --git a/mintlify/snippets/teleop-instructions.mdx b/mintlify/snippets/teleop-instructions.mdx deleted file mode 100644 index e4982c8..0000000 --- a/mintlify/snippets/teleop-instructions.mdx +++ /dev/null @@ -1,32 +0,0 @@ -The phospho teleoperation app works with a [Meta Quest](https://www.meta.com/fr/quest/quest-3/?srsltid=AfmBOorMLUmJKFQr35ssCi1DDqSNgpHk0sLHqo_tHG8kgclCYbMToAPa). Compatible models: Pro, 2, 3, 3s. - -1. In the Meta Quest, open the phospho teleop application. Wait a moment, then you should see a row displaying **phosphobot** or your computer name. Click the **Connect** button using the `Trigger Button`. - -Make sure you're connected to the same WiFi as the phosphobot server or the control module - -If you don't see the server, check the IP address and port of the server in the phosphobot dashboard and enter it manually. - -![Select Phosphobot server](/assets/meta-quest-server-list.png) - -2. After connecting, you'll see the list of connected cameras and recording options. - -- Move the windows with the `Grip button` to organize your space. -- Enable preview to see the camera feed. Check the **camera angles** and adjust their positions if needed. - -We recommend **disabling** the camera preview to save bandwidth. - -![Meta Quest controller button names](/assets/names_buttons.jpg) - -3. Press `A` once to start teleoperation and begin moving your controller. - - - The robot will naturally follow the movement of your controller. Press the `Trigger button` to close the gripper. - - Press `A` again to stop the teleoperation. The robot will stop. - -4. Press `B` to start recording. You can leave the default settings for your first attempt. - - - Press `B` again to stop the recording. - - Press `Y` (left controller) to discard the recording. - -5. Continue teleoperating and stop the recording by pressing `B` when you're done. - -6. The recording is automatically saved in **LeRobot v2** format and **uploaded to your HuggingFace account.** \ No newline at end of file diff --git a/mintlify/so-100/quickstart.mdx b/mintlify/so-100/quickstart.mdx deleted file mode 100644 index 3bc8c8a..0000000 --- a/mintlify/so-100/quickstart.mdx +++ /dev/null @@ -1,161 +0,0 @@ ---- -title: "SO-100 quickstart guide" -description: "How to set up phosphobot and control your SO-100 robot arm" ---- - -import InstallCode from '/snippets/install-code.mdx'; - - - -## Get your SO-100 robot arm - -The SO-100 robot arm is a 5-DOF robotic arm with a 1-DOF gripper. 
It's a popular robot arm for AI robotics, with many users. - -The SO-100 robot arm is [open source](https://github.com/TheRobotStudio/SO-ARM100) and can be built using off-the-shelf components and 3D printed parts. - -If you're looking to buy SO-100 robot arms already assembled, with cameras and software, you can get a starter pack [on our shop](https://robots.phospho.ai). - -## How to build the SO-100 leader arm? Step by step assembly guide - -Here is a step-by-step video guide to build the SO-100 robot arm: - - - -- [Parts list](https://github.com/TheRobotStudio/SO-ARM100) -- [Configure the motors](https://github.com/phospho-app/phosphobot/tree/main/scripts/feetech) - -## Attach the SO-100 arm - -Find a table and fix the SO-100 robot arm using the 2 table clamps in the kit (see image below). - -Make sure the arm is securely fastened and won't move. Clear away any clutter that could get in the way of the arm's movement. - -![SO-100 fixed using clamps](/assets/so100clamps.jpg) - -## Plug everything together - -In this order: - -1. Plug the _SO-100 robot arm_ into the power supply using the **black** 12V power supply. -2. Plug one end of the **USB-C cable** into the _SO-100 robot arm_ and the other into your computer (laptop, raspberry pi, etc). -3. If you have additional cameras, plug them into your computer - -Below is an example of a full setup with a SO-100 arm, a stereoscopic camera plugged on a Raspberry Pi. - -![Full setup](/assets/pdk1_plugged.jpg) - -## Start phosphobot - -Once everything is connected and powered on, run the following command in a terminal to install the phosphobot software: - - - -Then, fire up the the server: - -```bash -phosphobot run -``` - -Go to `localhost` in your web browser to access the phosphobot dashboard. Go to **Control** to control your robot using only your keyboard! - -![phosphobot dashboard](/assets/phosphobot-dashboard.png) - -## Calibrate the robot - -This step is only relevant if you built your own robot arm: the [assembled robots](https://robots.phospho.ai) that we ship are **already calibrated**. - -Your SO-100 robot should be automatically detected by the phosphobot software. However, you still need to **calibrate** the robot to make it work properly. - -Follow this video guide to calibrate your robot: - - - -Here are detailed written instructions to calibrate your SO-100 robot arm: - - -1. Go to the page **Calibration** in the dashboard. -2. Make sure to be able to securely catch your robot, as calibration disable torque. This can make your robot fall. -3. Follow the instructions on the screen to calibrate your robot. - - -**Position 1:** Arm is facing **forward**. Gripper is fully **closed**. - -![Calibration Position 1](/assets/Calibration-position-1.jpg) - -red is x, green is y, blue is z - -**Position 2:** Arm is twisted to its **left**. Gripper is fully **open**. - -![Calibration Position 1](/assets/Calibration-position-2.jpg) - -red is x, green is y, blue is z - -After moving to position 2, **finish** the calibration sequence by clicking the button. - - - -Your calibration is saved in the `~/phosphobot/calibration` folder of your home directory. You can edit this file in a text editor to further refine the calibration. For example, to tune the PID. - -### About LeRobot calibration - -If you're using LeRobot framework for training, you'll need to calibrate the robot with LeRobot after this phosphobot calibration. - -You need first to calibrate with phosphobot, then with LeRobot. 
The LeRobot calibration process is compatible with phosphobot, but independent. This only needs to be done once. - - -# Start controlling the robot - -Your robot is now ready to receive commands! - -In the dashboard, click the **Control** button to control the robot arm. In the first tab, you can control the robot with the keyboard. - -You can also use leader arm control, a gamepad, or a Meta Quest VR headset. [Learn more.](/basic-usage/teleop") - -![Robot_Controller](/assets/controll_schema.png) - - -# Next steps - -You can now [control your robot](/basic-usage/teleop), [record your first dataset](/basic-usage/dataset-recording) and [train an AI model](/basic-usage/training) to make the robot move by itself. - - - - Teleoperate a real robot - - - How to record a dataset with your robot - - - How to train an AI model from a dataset you recorded - - diff --git a/mintlify/so-101/quickstart.mdx b/mintlify/so-101/quickstart.mdx deleted file mode 100644 index 0c80c29..0000000 --- a/mintlify/so-101/quickstart.mdx +++ /dev/null @@ -1,148 +0,0 @@ ---- -title: "SO-101 Quickstart Guide" -description: "How to set up phosphobot and control your SO-101 robot arms" ---- - -import InstallCode from '/snippets/install-code.mdx'; - -## Get Your SO-101 Robot Arms - -The SO-101 is a robot setup consisting of two 6-DOF arms (5 DOF body, 1 DOF gripper). One of the robot arms is the "leader" arm, and the other is the "follower" arm. The operator moves the leader arm, and the follower arm mimics the movements. It's a popular setup for teleoperation and imitation learning in AI robotics. - -The SO-101 robot arms are [open source](https://github.com/TheRobotStudio/SO-ARM100), and you can build them using off-the-shelf components and 3D-printed parts. - -If you're looking to buy pre-assembled SO-101 robot arms, with cameras and software, you can get a starter pack [on our shop](https://robots.phospho.ai). - -### SO-100 vs. SO-101 - -The differences between the SO-100 and SO-101 are **subtle.** Software compatible with the SO-100 will work with the SO-101, but there are some improvements in the SO-101. - -- SO-101 3D printed parts are easier to assemble. The base part no longer requires a small screwdriver, the wrist is easier to 3D print, you can't rotate the wrist more than 360 degrees, and the wires are outside. This makes maintenance easier and assembly faster. -- To assemble the SO-101 leader arm, it is now recommended to keep the gears in the motors. The gears of the leader arm have different gear ratios. - -## Assembly Guide for the SO-101 - -A video tutorial is available to guide you through the assembly of the SO-101 follower arm. The assembly for the leader arm is similar. Note that the leader and follower arms use motors with different gear ratios to ensure the leader is easy to move while still supporting its weight. - - - -- **Parts List & 3D Models:** [https://github.com/TheRobotStudio/SO-ARM100](https://github.com/TheRobotStudio/SO-ARM100) -- **Motor Configuration:** [https://github.com/phospho-app/phosphobot/tree/main/scripts/feetech](https://github.com/phospho-app/phosphobot/tree/main/scripts/feetech) - -## Secure the SO-101 Arms - -Choose a stable surface and firmly attach both the leader and follower SO-101 robot arms using the provided table clamps. Separate them by about 50cm to ensure they can move freely without colliding. - -Ensure both arms are securely fastened to prevent any movement during operation. 
Keep the surrounding area clear of any obstacles that could impede the arms' movement. - -![SO-100 fixed using clamps](/assets/so100clamps.jpg) - -## Connecting the Components - -Follow this sequence to connect your setup: - -1. Connect both the **leader and follower arms** to their power supplies. Make sure the voltage matches the specifications of the motors (6V or 12V depending on the motors you use). -2. For each arm, plug one end of a **USB-C cable** into the arm's controller and the other end into your computer (e.g., laptop, Raspberry Pi). -3. Connect any additional cameras to your computer. - -![Full setup](/assets/pdk1_plugged.jpg) - -## Start phosphobot - -With all components connected and powered on, open a terminal and run the following command to install the phosphobot software: - - - -Next, start the server: - -```bash -phosphobot run -``` - -Navigate to `localhost` in your web browser to access the phosphobot dashboard, where you can find the **Control** section to operate your robot with the leader arm. - -![phosphobot dashboard](/assets/phosphobot-dashboard.png) - -## Calibrating the Robot Arms - -If you purchased pre-assembled robots, they should already be calibrated. This step is for those who have built their own arms. - -The phosphobot software should automatically detect your SO-101 arms, but **calibration is necessary** for proper functionality. This ensures that the leader and follower arms have the same position values when they are in the same physical configuration. A video guide is available to walk you through the calibration process with phosphobot. - - - -1. You will need to calibrate **each arm separately**. -2. Navigate to the **Calibration** page in the dashboard. -3. Be prepared to securely hold the arm you are calibrating, as the process will disable motor torque, which could cause it to fall. -4. Follow the positions in the video to calibrate each robot arm one by one. - -Your calibration settings are saved in the `~/phosphobot/calibration` folder in your home directory, where you can manually edit the file for finer adjustments. - -### About LeRobot calibration - -If you're using the LeRobot framework for training, you'll need to calibrate the robot with LeRobot after this phosphobot calibration. - -You need to first calibrate with phosphobot, then with LeRobot. The LeRobot calibration process is compatible with phosphobot, but independent. This only needs to be done once. - - -Why is this the case? - -This is because LeRobot calibrates a SO-101 by moving the arm to the maximum and minimum positions of each joint. Then, LeRobot saves the min and max limits **inside** the servomotors. The servos can't move outside these limits. - -phosphobot, on the other hand, sets the "first calibration position" as the **position zero** of each servo. The min and max limits are **not** updated. Because of this, the LeRobot's min and max limits keep their old values, and the servos can't move outside of these stale ranges. - -To solve this, you need to re-calibrate the robot with LeRobot **after calibrating with phosphobot** to ensure that the min and max limits are correct. The calibrations are compatible, but independent. - - -# Controlling the Robot - -Your robot is now ready for operation! - -From the dashboard, select **Control** then the **Leader arm** tab to control the follower arm by moving the leader arm. Y - -You can also use Keyboard Control to manipulate the robot arm using your keyboard, a gamepad, or a Meta Quest VR headset. 
[Learn more.](/basic-usage/teleop") - -![Robot_Controller](/assets/controll_schema.png) - -# Next Steps - -You are now set to explore more advanced functionalities: - - - - Teleoperate a real robot - - - How to record a dataset with your robot - - - How to train an AI model from a dataset you recorded - - \ No newline at end of file diff --git a/mintlify/unboxings/dk1.mdx b/mintlify/unboxings/dk1.mdx deleted file mode 100644 index 5520d49..0000000 --- a/mintlify/unboxings/dk1.mdx +++ /dev/null @@ -1,195 +0,0 @@ ---- -title: "Dev kit unboxing (DK1)" -description: "Unbox and set up your phospho dev kit." ---- - -In this guide, we will unbox and set up the **first version** of the phospho dev kit (DK1). - -We no longer sell these dev kits. If you have the DK2, see the [DK2 unboxing guide](/unboxings/dk2). - -![packshot phospho dev kit dk1](/assets/packshot-dk1.jpg) - -## What's in the box? - -phospho dev kits come with EU power plugs. - -- **Robot arm** - - 1x SO-100 robot arm - - 1x 12V power source (for the arm) - - 1x USB-C to USB-C cable - - 1x USB-C to USB adapter - - 2x Table clamps -- **Camera** - - 1x Stereoscopic camera - - 1x USB-C to USB cable - - 1x Camera stand -- **Control module** - - 1x Control module - - 1x Raspberry Pi USB-C power supply - - 1x Micro SD card adapter - -## 1. Attach the SO-100 arm - -Find a table and fix the SO-100 robot arm using the 2 table clamps in the kit (see image below). - -Make sure the arm is securely fastened and won't move. Clear away any clutter that could get in the way of the arm's movement. - -![SO-100 fixed using clamps](/assets/so100clamps.jpg) - -## 2. Plug everything together - -In this order: - -1. Plug the _SO-100 robot arm_ into the power supply using the **black** 12V power supply. -2. Plug one end of the **USB-C cable** into the _SO-100 robot arm_ and the other into any **front USB port** on the **control module** (use the USB-C to USB **adapter** in the kit). -3. Attach the **stereoscopic camera** to the **camera stand** and place it next to the robot arm. -4. Plug the **stereoscopic camera** into one of the control module **front USB ports**. -5. Plug the **control module** into the **white** power supply (this goes into the **USB-C port on the side of the control module**). - -![Full setup](/assets/pdk1_plugged.jpg) - -## 3. Connect your control module to your home WiFi - -After plugging in the control module, look at the LED indicator: it should blink **four times quickly** and then pause. This means it is in **hotspot** mode (ready for setup). - -Now, let's connect the control module to your home WiFi so it can communicate with your devices. - -### Connect to the control module hotspot - -Using your computer or phone, connect to the control module's WiFi network: -- Open the WiFi settings on your device -- Look for a network called `phosphobot` in your WiFi list and connect to it. -- Enter the password: `phosphobot123`. - -### Access the control module dashboard - -In your browser, go to [phosphobot.local](http://phosphobot.local). This is the **dashboard** to control and set up your control module. - -_On Android, we recommend using the Chrome browser._ - -### Connect to your home WiFi - -1. On `phosphobot.local`, go to `Network Management`. Enter the **network name** (WiFi SSID) and **password** of your WiFi network. - - -The network name is cAsE sEnSiTiVe and should be exactly as seen on your device/router. Double-check for typos. - - -2. The control module will now connect to your WiFi network. 
If the connection is successful, the LED becomes **solid green.** - - -If the LED **blinks slowly** (1-second intervals), it means the connection failed. Try these steps: -- Restart the control module by long-pressing the button next to the LED. -- Reconnect to the `phosphobot` WiFi network and try again. - - -3. Connect your computer back to your home WiFi network (the one you entered in the dashboard). - -4. Reload the page [phosphobot.local](http://phosphobot.local) to access the control module dashboard. - -5. You're done! Click on `Keyboard Control` and then on `Start Moving Robot`. Follow the instructions to control the robot with your keyboard. If this works, you're ready to send your first commands. - -If this fails, restart the control module by long-pressing the button next to the LED. Then, start over this section. - - -Every time the control module is powered on, it will check for updates and install them automatically. They will be available the next time you power it on. - - - - -Using the **BTBerryWiFi** app, you can use Bluetooth to connect the control module to your home WiFi. - -_Special thanks to its creator Norm Frenette for this awesome app!_ - -### Step 1: Download the BTBerryWiFi app - -Download the _BTBerryWiFi_ app for your smartphone: - -- on **iPhone,** [download the app from the AppStore.](https://apps.apple.com/us/app/btberrywifi/id1596978011) -- on **Android,** [download the app here.](https://drive.google.com/drive/folders/12l5lCZS4T8wHfdSLyCGM-hrzh64EM-zo?usp=sharing) Install it with sideloading. _Note: the free version of the Android app only works for 7 days._ - -### Step 2: Reboot the control module - -1. Long press the _power button_ of the control module until the LED turns red. -2. Press the _power button_ again and keep it pressed until the LED turns green. -3. Wait for the control module to boot up. When the LED blinks green slowly and regularly, your control module is ready for pairing. - - -Make sure no device is connected to the control module through WiFi. - - -### Step 3: Connect to WiFi with BTBerryWiFi - -For reference, here's the link to the [full user guide](https://www.btberrywifi.com/). - -1. Launch **BTBerryWiFi.** Accept the authorization request to use Bluetooth. Click on the button "Scan for Raspberry Pi". - -2. Wait for _phosphobot_ to show up in the list below, then select it. - -Sometimes, you can see _no_name_ instead of _phosphobot_ in the list. If so, select _no name_ and carry on. - -![Select phosphobot](/assets/rpi-1.png) - -If no device shows up: close the app, turn Bluetooth off and on on your smartphone, and reboot the control module. Then start again. - -3. Wait for your smartphone to pair with the control module, then for the WiFi access points to appear. This can take up to a minute. - -![Wait for connection](/assets/rpi-2.png) - -If you stay stuck on this screen for longer than 2 minutes, close the app, turn Bluetooth off and on on your smartphone, and reboot the control module. Then start again. - -4. Select your home WiFi in the list. Then, enter the WiFi password. This will connect the control module to the WiFi network. - -![Select a WiFi network](/assets/rpi-3.png) - -Then enter the WiFi password and press connect. - -![Enter password](/assets/rpi-4.png) - -This step may fail if you're trying to connect to a WiFi hotspot. 
If this happens to you, learn how to flash your own SD card [using this guide.](https://phospho-ai.notion.site/How-to-setup-the-Raspberry-Pi-with-phospho-teleop-server-1848ca0b4c0480c28281eadb5d1245ee?pvs=4) - -5. The control module is now connected to WiFi. Access the control module dashboard on the URL [phosphobot.local](http://phosphobot.local). - -6. You're done! Click on "Keyboard Control" and then on "Start Moving Robot" to test the connection. Everything works? Great! You can follow the instruction and control your robot with your keyboard. - - - -## What's next? - - - - Teleoperate a real robot - - - How to record a dataset with your robot - - - How to train an AI model from a dataset you recorded - - - Join the Discord to ask questions, get help from others and get updates - - \ No newline at end of file diff --git a/mintlify/unboxings/dk2.mdx b/mintlify/unboxings/dk2.mdx deleted file mode 100644 index a71266d..0000000 --- a/mintlify/unboxings/dk2.mdx +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: "Starter pack unboxing (DK2)" -description: "Unbox and set up your phospho starter pack." ---- - -import InstallCode from '/snippets/install-code.mdx'; - -In this guide, we will unbox and set up the **phospho starter pack** (DK2). - -We are currently taking orders [here](https://robots.phospho.ai). - -![packshot phospho starter pack dk2](/assets/packshot-dk2.jpg) - -## What's in the box? - -phospho dev kits come with EU power plugs. - -- **2 Robot arm** - - 2x SO-100 robot arm - - 2x 12V power source (for the arms) - - 2x USB-C to USB-C cable - - 4x Table clamps -- **2 Wrist Cameras** - - 2x Wrist cameras - - 2x Camera cables (USB) -- **Access to the [Meta Quest app](../examples/teleop)** for VR control - - - - - -## 1. Attach the SO-100 arms - -Find a table and fix each SO-100 robot arm using the table clamps in the kit (see image below). - -Make sure the arm is securely fastened and won't move. Clear away any clutter that could get in the way of the arm's movement. - -![SO-100 fixed using clamps](/assets/so100clamps.jpg) - -## 2. Install the wrist cameras - -For safe transport, the wrist cameras are not installed on the robot arms. You don't need screws or special tools to install them, just pop them in the holes on the robot arms. - -Please refer to the video below to see how to install the wrist cameras. - - - -If you need to transport the robot arms, you can remove the wrist cameras by pulling them out of the holes. - - -## 3. Plug everything together - -In this order: - -1. Plug each _SO-100 robot arm_ into the power supply using the **black** 12V power supply. -2. Plug each of the **USB-C cable** into the _SO-100 robot arm_ and the other into your computer. -3. Plug the cable to the **wrist cameras** and into your computer. - -## 4. Install and run the phosphobot software - -Once everything is connected and powered on, run the following command in a terminal to install the phosphobot software: - - - - -Then, fire up the the server: - -```bash -phosphobot run -``` - -Go to `localhost` in your web browser to access the phosphobot dashboard. Go to **Control** to control your robot using only your keyboard! - -![phosphobot dashboard](/assets/phosphobot-dashboard.png) - -## What's next? 
- - - - Teleoperate a real robot - - - How to record a dataset with your robot - - - How to train an AI model from a dataset you recorded - - - Join the Discord to ask questions, get help from others and get updates - - \ No newline at end of file diff --git a/mintlify/welcome.mdx b/mintlify/welcome.mdx deleted file mode 100644 index 172765d..0000000 --- a/mintlify/welcome.mdx +++ /dev/null @@ -1,123 +0,0 @@ ---- -title: "Welcome to phosphobot" -description: "An absurdly simple way to train AI models for real-world robots, built for ML engineers." ---- - -import InstallCode from '/snippets/install-code.mdx'; -import GetMQApp from '/snippets/get-mq-app.mdx'; - -Phospho is how ML engineers make real robots intelligent. -We provide the hardware, libraries, and remote control capabilities so developers can collect data, -train AI models and deploy applications to real robots in minutes instead of months. - -# Highlights - -- ๐Ÿ•น๏ธ Control your robots to record datasets in minutes with a keyboard, a gamepad, a leader arm, and more -- โšก Train Action models such as ACT, ฯ€0 or gr00t-n1.5 with one click -- ๐Ÿฆพ Compatible with the SO-100, SO-101, Unitree Go2, Agilex Piper... -- ๐Ÿšช Dev-friendly API -- ๐Ÿค— Fully compatible with LeRobot and HuggingFace -- ๐Ÿ–ฅ๏ธ Runs on macOS, Linux and Windows -- ๐Ÿฅฝ Meta Quest app for teleoperation -- ๐Ÿ“ธ Supports most cameras (classic, depth, stereo) -- ๐Ÿ”Œ Open Source: [Extend it with your own robots and cameras](https://github.com/phospho-app/phosphobot/tree/main/phosphobot) - -Pssst... working with phosphobot? Get expert help from the team. Contact us at [contact@phospho.ai](mailto:contact@phospho.ai) - - -# Installation - -In a terminal, run the following command: - - - -Then, fire up the the server: - -```bash -phosphobot run -``` - -It can take up to 15 seconds for the server to start. - -Go to `localhost` in your web browser to access the phosphobot dashboard. Go to **Keyboard Control** to control your robot using only your keyboard, record datasets, and train AI models! - -# VR Control - -phosphobot enables you to [control your robot arm in VR](./examples/teleop.mdx) using a Meta Quest 2, Pro, 3 or 3s. Speed up your data collection and unlock intuitive bimanual control. - - - - - - - -# How to update phosphobot? - -[We're shipping updates daily](https://github.com/phospho-app/phosphobot/releases). We fix bugs, add features, and improve the experience constantly. - -Keep your software up-to-date to benefit from the latest improvements. 
- - - -```bash macOS -brew update && brew upgrade phosphobot -``` - -```bash Linux -sudo apt update && sudo apt install --only-upgrade phosphobot -``` - -```powershell Windows -powershell -ExecutionPolicy ByPass -Command "irm https://raw.githubusercontent.com/phospho-app/phosphobot/main/install.ps1 | iex" -``` - -```uv (pip) -uvx phosphobot@latest run -``` - - - - -# Next step - - - - - How to install phosphobot on your computer - - - How to get started with the **phospho starter pack** - - - How to set up your SO-100 or SO-101 robot arm - - - Join the Discord to ask questions, chat with roboticists, and get updates about AI robotics - - \ No newline at end of file diff --git a/models/classify/index.html b/models/classify/index.html new file mode 100644 index 0000000..81e664c --- /dev/null +++ b/models/classify/index.html @@ -0,0 +1,2498 @@ + + + + + + + + + + + + + + + + + + + + Classification - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

+ Request access to the preview by contacting us at contact@phospho.ai +

+

phospho can handle all the data processing, data engineering and model training for you. +For now, only binary classification models are supported (learn here what binary classification is).

+

Why train your custom classification model?

+

Most LLM chains involve classification steps where the LLM is prompted with a classification task. +Training your own classification model can help you to:

+
    +
  • improve the accuracy of the classification
  • +
  • reduce the latency of the classification (as you have the model running in the application code)
  • +
  • reduce the cost of the classification (as you don't have to call an external LLM API)
  • +
  • reduce risks of downtime (as you don't depend on an external LLM API)
  • +
+

Available models

+

phospho-small is a small text classification model that can be trained with a few examples (minimum 20 examples). +It runs on CPU and once trained using phospho, you can download your trained model from Hugging Face.

+

Train a model on your data

+

To train a model, you need to provide a list of examples for the model: at least 20 examples containing text, labels and a label description. Each example should have the following fields:

+
    +
  • text (str): the text to classify (for example, a user message)
  • +
  • label (bool): True or False according to the classification
  • +
  • label_text (str): a few-word description of the label when it is true (for example, "user asking for pricing")
  • +
+

For example, your examples could look like this:

+
[
+    {
+      "text": "Can I have a discount on phospho pro?",
+      "label": true,
+      "label_text": "user asking for pricing"
+    },
+    {
+      "text": "I want to know more about phospho pro",
+      "label": false,
+      "label_text": "user asking for pricing"
+    },
+    ...
+  ]
+
+

Start the training using the following API call or python code snippet:

+
+
+
+

```bash HTTP API
curl -X 'POST' \
+  'https://api.phospho.ai/v2/train' \
+  -H 'accept: application/json' \
+  -H 'Authorization: Bearer $PHOSPHO_API_KEY' \
+  -H 'Content-Type: application/json' \
+  -d '{
+  "model": "phospho-small",
+  "examples": [
+    {
+      "text": "How much is phospho pro?",
+      "label": true,
+      "label_text": "user asking for pricing"
+    },
+    {
+      "text": "I want to know more about phospho pro",
+      "label": false,
+      "label_text": "user asking for pricing"
+    },
+    ...
+  ],
+  "task_type": "binary-classification"
+}'
+```

+
+
+
import phospho
+
+phospho.init()
+
+my_examples = [
+    {
+      "text": "How much is phospho pro?",
+      "label": True,
+      "label_text": "user asking for pricing"
+    },
+    {
+      "text": "I want to know more about phospho pro",
+      "label": False,
+      "label_text": "user asking for pricing"
+    },
+    ...
+  ]
+
+model = phospho.train("phospho-small", my_examples)
+
+print(model)
+
+
+
+
+

You will get a model object in the response. You will need the model_id to use the model. It should look like this: phospho-small-8963ba3.

+
{
+  "id": "YOUR_MODEL_ID",
+  "created_at": 1714418246,
+  "status": "training",
+  "owned_by": "YOUR_ORG_ID",
+  "task_type": "binary-classification",
+  "context_window": 514
+}
+
+

The training will take a few minutes. You can check the status of the model using the following API call:

+
+
+
+
curl -X 'GET' \
+  'https://api.phospho.ai/v2/models/YOUR_MODEL_ID' \
+  -H 'accept: application/json' \
+  -H 'Authorization: Bearer $PHOSPHO_API_KEY'
+
+

```python Python
import requests
+import os
+
+model_id = "YOUR_MODEL_ID"  # model["id"] if you run the above code
+url = f"https://api.phospho.ai/v2/models/{model_id}"
+
+headers = {
+    "accept": "application/json",
+    "Content-Type": "application/json",
+    "Authorization": f"Bearer {os.environ['PHOSPHO_API_KEY']}"
+}
+
+response = requests.get(url, headers=headers)
+
+print(response.text)
+```

+
+
+
+

Your model will be ready when the status changes from training to trained.
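For reference, here is a minimal polling sketch in Python. It assumes the status field shown in the model object above and a PHOSPHO_API_KEY environment variable; adjust the sleep interval to your needs.

```python
import os
import time
import requests

# Poll the model until training finishes (status goes from "training" to "trained").
model_id = "YOUR_MODEL_ID"
url = f"https://api.phospho.ai/v2/models/{model_id}"
headers = {
    "accept": "application/json",
    "Authorization": f"Bearer {os.environ['PHOSPHO_API_KEY']}",
}

status = "training"
while status == "training":
    time.sleep(30)  # training usually takes a few minutes
    status = requests.get(url, headers=headers).json()["status"]

print(f"Model {model_id} is now {status}")
```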

+

Use the model

+

You can use the model in 2 ways:

+
    +
  • directly download it from Hugging Face (phospho-small runs on CPU)
  • +
  • through the phospho API
  • +
+ +

You can download the model from phospho Hugging Face repo. The model id is the same as the one you got when training the model.

+

For example, if the model id is phospho-small-8963ba3, you can download the model from Hugging Face with the id phospho-app/phospho-small-8963ba3.

+

Then you can use the model like any other Hugging Face model:

+
from setfit import SetFitModel
+
+model = SetFitModel.from_pretrained("phospho-app/phospho-small-8963ba3")
+
+outputs = model.predict(["This is a sentence to classify", "Another sentence"])
+
+

Make sure to have enough RAM to load the model and the tokenizer in memory. The model is 420MB.

+

Use the model through the API

+

+ {" "} + AI Models predict endpoints are in preview and not yet ready for production trafic. +

+

To use the model through the API, you need to send a POST request to the /predict endpoint with the model id and the batch of text to classify. If it's the first request you send, you might experience a delay as the model is loaded in memory.
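For instance, in Python, a request mirroring the curl example below might look like this. This is a sketch assuming the same /v2/predict payload as the documented curl call; the exact shape of the response body is not specified here, so it is simply printed.

```python
import os
import requests

# Classify a batch of text with your trained model through the /predict endpoint.
response = requests.post(
    "https://api.phospho.ai/v2/predict",
    headers={
        "accept": "application/json",
        "Authorization": f"Bearer {os.environ['PHOSPHO_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "inputs": ["Can I have a discount on phospho pro?"],
        "model": "YOUR_MODEL_ID",
    },
)

print(response.json())
```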

+
+
+
+
+
+

```bash API
curl -X 'POST' \
+  'https://api.phospho.ai/v2/predict' \
+  -H 'accept: application/json' \
+  -H 'Authorization: Bearer $PHOSPHO_API_KEY' \
+  -H 'Content-Type: application/json' \
+  -d '{
+    "inputs": [
+      "Can I have a discount on phospho pro?"
+    ],
+    "model": "YOUR_MODEL_ID"
+  }'
+```

```python Python
+# Coming soon!
+

+

List your models

+

You can also list all the models you have access to and that can accept requests:

+
+
+
+

```bash HTTP API

+

curl -X 'GET' \ + 'https://api.phospho.ai/v2/models' \ + -H 'accept: application/json' \ + -H 'Authorization: Bearer $PHOSPHO_API_KEY' +

```python Python
+# Coming soon!
+

+
+
+
+ + + + + + + + + + + + \ No newline at end of file diff --git a/models/embeddings/index.html b/models/embeddings/index.html new file mode 100644 index 0000000..d2060af --- /dev/null +++ b/models/embeddings/index.html @@ -0,0 +1,2458 @@ + + + + + + + + + + + + + + + + + + + + + + + + Intent Embeddings - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Note

+
+

This model is in preview. Contact us for production or latency-sensitive specs.

+

You can generate embeddings for text using the intent-embed model. Intent Embed is a model that generates embeddings for text, specifically to represent the user intent. Potential use cases include:

+
    +
  • User Intent classification
  • +
  • Intent similarity
  • +
  • Out of topic exclusion
  • +
  • Intent clustering and analytics
  • +
  • And more
  • +
+

Read the technical paper here: Phospho Intent Embeddings.

+

Requirements

+

Create an account on phospho.ai and get your API key. You need to have set up a billing method. You can add it in the Settings of your dashboard here.

+

Usage

+

Using the OpenAI client

+

The phospho embedding endpoint is OpenAI compatible. You can use the OpenAI client to send requests to the phospho API.

+
from openai import OpenAI
+
+client = OpenAI(
+    api_key="YOUR_PHOSPHO_API_KEY",
+    base_url="https://api.phospho.ai/v2",
+)
+
+response = client.embeddings.create(
+    model="intent-embed",
+    input="I want to use the phospho intent embeddings api",
+    encoding_format="float",
+)
+
+print(response)
+
+

For now, the input must be a single string. Passing more than one string will result in an error.

+

Using the API directly

+

To send a request, add:

+
    +
  • text: The text to embed, usually a user query or message.
  • +
  • model: must be set to intent-embed.
  • +
+

Optionally, to link this embedding to one of your projects, you can specify the following optional parameters:

+
    +
  • project_id: The project id you want to link this embedding to.
  • +
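For example, the request body with the optional project_id could look like the sketch below. Placing project_id directly in the JSON body is an assumption based on the parameter description above, and YOUR_PROJECT_ID is a placeholder.

```python
# Hedged sketch: request body linking the embedding to one of your phospho projects.
data = {
    "input": "Your text to embed here",
    "model": "intent-embed",
    # Optional: link this embedding to one of your projects (field placement is an assumption)
    "project_id": "YOUR_PROJECT_ID",
}
```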
+
+
+
+
+
+
curl -X 'POST' \
+  'https://api.phospho.ai/v2/embeddings' \
+  -H 'accept: application/json' \
+  -H 'Authorization: Bearer YOUR_PHOSPHO_API_KEY' \
+  -H 'Content-Type: application/json' \
+  -d '{
+  "input": "Your text to embed here",
+  "model": "intent-embed"
+}'
+
+
+
+
+
+
+
import requests
+
+url = 'https://api.phospho.ai/v2/embeddings'
+headers = {
+    'accept': 'application/json',
+    'Authorization': 'Bearer YOUR_PHOSPHO_API_KEY',
+    'Content-Type': 'application/json'
+}
+data = {
+  "input": "Your text to embed here",
+  "model": "intent-embed"
+}
+
+response = requests.post(url, json=data, headers=headers)
+
+print(response.json()['data'][0]['embedding'])
+
+

You will get a response with the embeddings for the input text. The embeddings are a list of floats.

+
{
+  "object": "list",
+  "data": [
+    {
+      "object": "embedding",
+      "embedding": [
+        -0.045429688,
+        -0.039863896,
+        0.0077658836,
+        ...],
+      "index": 0
+    }
+  ],
+  "model": "intent-embed",
+  "usage": {
+    "prompt_tokens": 3,
+    "total_tokens": 3
+  }
+}
+
+

These embeddings can be stored in vector databases like Pinecone, Milvus, Chroma, Qdrant, etc. for similarity search, clustering, and other analytics.
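If you only want to compare two intents without a vector database, you can also compute a cosine similarity directly on the returned vectors. A minimal sketch, assuming numpy is installed and embedding_1 and embedding_2 are the "embedding" lists returned by the API above:

```python
import numpy as np

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two intent embedding vectors."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A similarity close to 1.0 means the two user intents are very similar:
# similarity = cosine_similarity(embedding_1, embedding_2)
```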

+

Pricing

+

The pricing is based on the number of tokens in the input text.

+

Note: You need to have a billing method set up to use the model. Access your billing portal to add one.

+ + + + + + + + + + + + + +
Model namePrice per 1M input tokens
intent-embed$0.94
+
+

Info

+
+

You are billed in $1 increments.

+

Contact us for high volume pricing.

+ + + + + + + + + + + + \ No newline at end of file diff --git a/models/llm/index.html b/models/llm/index.html new file mode 100644 index 0000000..d30f423 --- /dev/null +++ b/models/llm/index.html @@ -0,0 +1,2451 @@ + + + + + + + + + + + + + + + + + + + + + + + + LLMs - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+

Note

+

Access to this feature is restricted. Contact us at contact@phospho.ai to +request access.

+
+

To access any model through the phospho proxy, you need to have a phospho API key and a project on the phospho platform. You can get one by signing up on phospho.ai.

+

To access the Tak API, please refer to the Tak API page.

+

OpenAI

+

The phospho proxy is OpenAI compatible. You can use the OpenAI client to send requests to the phospho API. Messages sent through the phospho proxy will appear in your phospho dashboard.

+

Available models:

+
    +
  • gpt-4o
  • +
  • gpt-4o-mini
  • +
+

To access these models through the phospho proxy, you need to:

+
    +
  • set the base_url to https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/ (instead of https://api.openai.com/v1/)
  • +
  • set the OPENAI_API_KEY to your phospho API key
  • +
  • set the model to the desired model with the prefix openai: ( e.g. openai:gpt-4o or openai:gpt-4o-mini)
  • +
+
+
+
+

```python openai python sdk
import openai
+
+from openai import OpenAI
+client = OpenAI(api_key="PHOSPHO_API_KEY", base_url="https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/")
+
+completion = client.chat.completions.create(
+  model="openai:gpt-4o",
+  messages=[
+    {"role": "system", "content": "You are a helpful assistant."},
+    {"role": "user", "content": "Hello!"}
+  ]
+)
+
+print(completion.choices[0].message)
```

+
+
+

```bash curl
curl https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/chat/completions \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $PHOSPHO_API_KEY" \
+  -d '{
+    "model": "openai:gpt-4o",
+    "messages": [
+      {
+        "role": "system",
+        "content": "You are a helpful assistant."
+      },
+      {
+        "role": "user",
+        "content": "Hello!"
+      }
+    ]
+  }'
```

```javascript openai javascript sdk
+// Same as for the python SDK
+

+
+
+
+

Mistral AI

+

The phospho proxy is Mistral AI compatible. You can use the Mistral client to send requests to the phospho API. Messages sent through the phospho proxy will appear in your phospho dashboard.

+

Available models:

+
    +
  • mistral-large-latest
  • +
  • mistral-small-latest
  • +
+

To access these models through the phospho proxy, you need to:

+
    +
  • set the server_url to https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/
  • +
  • set the MISTRAL_API_KEY to your phospho API key
  • +
  • set the model to the desired model with the prefix mistral: ( e.g. mistral:mistral-large-latest or mistral:mistral-small-latest)
  • +
+
+
+
+
+
+
import mistralai
+
+from mistralai import Mistral
+client = Mistral(api_key="PHOSPHO_API_KEY", server_url="https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/")
+
+completion = client.chat.complete(
+  model="mistral:mistral-large-latest",
+  messages=[
+    {"role": "system", "content": "You are a helpful assistant."},
+    {"role": "user", "content": "Hello!"}
+  ]
+)
+
+print(completion.choices[0].message)
+
+
+
+
+
+
+
curl https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $PHOSPHO_API_KEY" \
+  -d '{
+    "model": "mistral:mistral-large-latest",
+    "messages": [
+      {
+        "role": "system",
+        "content": "You are a helpful assistant."
+      },
+      {
+        "role": "user",
+        "content": "Hello!"
+      }
+    ]
+  }'
+
+

```javascript mistralai javascript sdk
// Same as for the python SDK
```

+

Anthropic

+

Docs coming soon.

+ + + + + + + + + + + + \ No newline at end of file diff --git a/models/multimodal/index.html b/models/multimodal/index.html new file mode 100644 index 0000000..2b83404 --- /dev/null +++ b/models/multimodal/index.html @@ -0,0 +1,2397 @@ + + + + + + + + + + + + + + + + + + + + Multimodal LLM - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Enable your LLM app to understand images with the phospho multimodal model. For optimal performance, this model is not censored or moderated. Ensuring this model is used in a safe way is your responsibility.

+

Requirements

+

Create an account on phospho.ai and get your API key. You need to have set up a billing method. You can add it in the Settings of your dashboard here.

+

Sending a request

+

To send a request, add:

+
    +
  • text: your text prompt. For instance: "What is this?"
  • +
  • image_url: either a URL of the image or the base64 encoded image data. The inputs list must be of length 1.
  • +
+

Optionally, to better control the generation, you can specify the following optional parameters:

+
    +
  • max_new_tokens (int): defaults to 200. Max 250. The maximum number of tokens that can be generated in the response.
  • +
  • temperature (float, between 0.1 and 1.0) Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
  • +
  • repetition_penalty (float): defaults to 1.15. This parameter helps in reducing the repetition of words in the generated content.
  • +
  • top_p (float, between 0.0 and 1.0): defaults to 1.0. This parameter controls the diversity of the response by limiting the possible next tokens to the top p percent most likely.
  • +
+

If you pass a URL, make sure it is a publicly accessible image (for instance, by opening the link in a private browsing window). To encode an image in base64, you can use this website.
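As an alternative to an online tool, here is a minimal Python sketch that base64-encodes a local image before sending it in the image_url field of the request shown below. Whether the endpoint expects a raw base64 string or a full data URL is an assumption to verify; the file name is a placeholder.

```python
import base64

# Read a local image and base64-encode it for the "image_url" field.
with open("my_image.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "inputs": [{"text": "What is this?", "image_url": image_b64}],
    "model": "phospho-multimodal",
}
```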

+
+
+
+
+
+
curl -X 'POST' \
+  'https://api.phospho.ai/v2/predict' \
+  -H 'accept: application/json' \
+  -H 'Authorization: Bearer YOUR_PHOSPHO_API_KEY' \
+  -H 'Content-Type: application/json' \
+  -d '{
+  "inputs": [{"text": "What is this?", "image_url": "http://images.cocodataset.org/val2017/000000039769.jpg"}],
+  "model": "phospho-multimodal"
+}'
+
+
+
+
+
+
+
import requests
+
+url = 'https://api.phospho.ai/v2/predict'
+headers = {
+    'accept': 'application/json',
+    'Authorization': 'Bearer YOUR_PHOSPHO_API_KEY',
+    'Content-Type': 'application/json'
+}
+data = {
+    "inputs": [{"text": "What is this?", "image_url": "http://images.cocodataset.org/val2017/000000039769.jpg"}],
+    "model": "phospho-multimodal"
+}
+
+response = requests.post(url, json=data, headers=headers)
+
+print(response.json()['predictions'][0]['description'])
+
+
+

Note

+
+

This API endpoint is in preview and not optimized for production-scale serving. Contact us for on-premise deployment or high-performance endpoints.

+

Pricing

+

The pricing is based on the number of images sent.

+

Note: You need to have a billing method set up to use the model. Access your billing portal to add one.

+ + + + + + + + + + + + + + + +
Model namePrice per 100 imagesPrice per 1000 images
phospho-multimodal$1$10
+
+

Info

+
+

You are billed in $1 increments.

+
_Example: if you send 150 images, you will be billed \$2._
+
+

Contact us for high volume pricing.

+ + + + + + + + + + + + \ No newline at end of file diff --git a/models/tak/index.html b/models/tak/index.html new file mode 100644 index 0000000..0d40ff1 --- /dev/null +++ b/models/tak/index.html @@ -0,0 +1,2537 @@ + + + + + + + + + + + + + + + + + + + + + + + + Tak API - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+

Note

+

Access to this feature is restricted. Contact us at contact@phospho.ai to +request access.

+
+

Please note that the version available via API is different from the one available online at tak.phospho.ai.

+

To access the API, you need to have a phospho API key and a project on the phospho platform. You can get one by signing up on phospho.ai.

+

The tak API endpoint is OpenAI compatible. You can use the OpenAI client to send requests to the tak API. Messages sent will appear in your phospho dashboard.

+

Available models:

+
    +
  • tak-large: leverages GPT-4o, can search the web and the news.
  • +
+

Capabilities

+

Tak can search the web and the news to provide up-to-date information on a wide range of topics. It can also perform standard LLM tasks such as summarization, translation, and question answering. Answers are formatted in Markdown and contain the sources of the information (links in Markdown format).

+

Tak can handle tasks requiring multiple web searches in a single query, such as: What is NVIDIA's current stock price? And what is Apple's stock price?

+

Streaming is supported.

+

Limits

+

The default rate limit is 500 requests per minute. The maximum context window is 128k tokens.

+

Sending requests

+

To send requests, you need to:

+
    +
  • set the base_url to https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/ (instead of https://api.openai.com/v1/)
  • +
  • set the OPENAI_API_KEY to your phospho API key
  • +
  • set the model to phospho:tak-large
  • +
  • no need to specify a system message. If you add one, it won't be followed.
  • +
+
+
+
+
import openai
+
+from openai import OpenAI
+client = OpenAI(api_key="PHOSPHO_API_KEY", base_url="https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/")
+
+completion = client.chat.completions.create(
+  model="phospho:tak-large",
+  messages=[
+    {"role": "user", "content": "What are the latest AI news in France?"}
+  ]
+)
+
+print(completion.choices[0].message)
+
+# Or with streaming
+
+response = client.chat.completions.create(
+    model='phospho:tak-large',
+    messages=[
+        {'role': 'user', 'content': "Count to 10"}
+    ],
+    temperature=0,
+    stream=True  # this time, we set stream=True
+)
+
+for chunk in response:
+    print(chunk.choices[0].delta.content, end="", flush=True)
+
+
+
+
curl https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/chat/completions \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $PHOSPHO_API_KEY" \
+  -d '{
+    "model": "openai:gpt-4o",
+    "messages": [
+      {
+        "role": "user",
+        "content": "What are the latest AI news in France?"
+      }
+    ]
+  }'
+
+
+
+

```javascript openai javascript sdk
// Same as for the python SDK
```

+
+
+
+

Pricing

+

The pricing is based on the number of tokens in input messages and output completion.

+

Note: You need to have a billing method set up to use the model. Access your billing portal to add one.

+ + + + + + + + + + + + + + + +
Model namePrice per 1M input tokensPrice per 1M output tokens
tak-large$5$20
+ + + + + + + + + + + + \ No newline at end of file diff --git a/phospho-mkdocs/.python-version b/phospho-mkdocs/.python-version deleted file mode 100644 index 2c07333..0000000 --- a/phospho-mkdocs/.python-version +++ /dev/null @@ -1 +0,0 @@ -3.11 diff --git a/phospho-mkdocs/README.md b/phospho-mkdocs/README.md deleted file mode 100755 index 9eec8b2..0000000 --- a/phospho-mkdocs/README.md +++ /dev/null @@ -1,9 +0,0 @@ -# mkdocs - -Github pages docs of phospho - -## Run locally - -``` -sudo uv run mkdocs serve -``` diff --git a/phospho-mkdocs/docs/analytics/ab-test.md b/phospho-mkdocs/docs/analytics/ab-test.md deleted file mode 100644 index 9c3cf7f..0000000 --- a/phospho-mkdocs/docs/analytics/ab-test.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -title: AB Testing -description: "Run AB tests in your app to see which version performs better" ---- - -AB testing lets you compare different versions of your app to see which one performs better. - -![AB tests](../images/explore/abtest.jpeg) - -## What is AB testing - -AB testing is a method used to compare two versions of a product to determine which performs better. - -Comparing on a single criteria is hard, especially for LLM apps. Indeed, the performance of a product can be measured in many ways. - -In phosho, the way AB testing is done is by comparing the **[analytics distribution](/docs/guides/events) of two versions**: the candidate one and the control one. - -## Prerequisites to run an AB test - -You need to have setup [event detection](/docs/guides/events) in your project. This will run analytics to measure the performance of your app: - -- **Tags:** eg. topic of the conversation -- **Scores:** eg. sentiment of the conversation (between 1 and 5) -- **Classifiers:** eg. user intent ("buy", "ask for help", "complain") - -## Run an AB test from the platform - -1. Click on the button "Create an AB test" on the phospho platform. If you want, customize the `version_id`, which is the name of the test. - -2. Send data to the platform [by using an SDK, an integration, a file, or more](/docs/getting-started). All new incomming messages will be tagged with the `version_id`. - -## Alternative: Specify the `version_id` in your code - -Alternatively, you can specify the `version_id` in your code. This will override the `version_id` set in the platform. - -When logging to phospho, add a field `version_id` with the name of your version in `metadata`. See the example below: - -=== "Python" - - ```python - log = phospho.log( - input="log this", - output="and that", - version_id="YOUR_VERSION_ID" - ) - ``` - -=== "Javascript" - - ```javascript - log = phospho.log({ - input: "log this", - output: "and that", - version_id:"YOUR_VERSION_ID", - }); - ``` - -=== "API" - - ```bash - curl -X POST https://api.phospho.ai/v2/log/$PHOSPHO_PROJECT_ID \ - -H "Authorization: Bearer $PHOSPHO_API_KEY" \ - -H "Content-Type: application/json" \ - -d '{ - "batched_log_events": [ - { - "input": "your_input", - "output": "your_output" - "metadata": { - "version_id": "YOUR_VERSION_ID" - } - } - ] - }' - ``` - -## Run offline tests - -If you want to run offline tests, you can use the [phospho command line interface](/docs/docs/cli). Results of the offline tests are also available in the AB test tab. - -
- -- :octicons-terminal-16:{ .lg .middle } __phospho CLI__ - - --- - - Learn more about the phospho command line interface - - [:octicons-arrow-right-24: Read more](#) - -
diff --git a/phospho-mkdocs/docs/analytics/clustering.md b/phospho-mkdocs/docs/analytics/clustering.md deleted file mode 100644 index 2acc592..0000000 --- a/phospho-mkdocs/docs/analytics/clustering.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -title: Clustering -description: "Group users messages based on their intention" ---- - -Clustering lets your group user messages based on their intention. This is great to get a feeling of "what are my users talking about?" and to identify the most common topics. - -![Clustering](../images/clustering-demo.gif) - -## How it works - -The phospho clustering uses a combination of **user intent embedding** and **unsupervized clustering algorithms** to group messages together. - -The user intent embedding is a representation of the user intention in a high dimensional space. This representation is generated using a deep learning model trained on a large dataset of user messages. [Learn more here.](https://research.phospho.ai/phospho_intent_embed.pdf) - -We are constantly evaluating and improving the clustering algorithms to provide the best results. - -## How to run a clustering - -To use the clustering feature, you need to have a phospho account and an API key. You can get one by signing up on [phospho.ai](https://platform.phospho.ai). - -1. **Import data**. If not already done, [import your data](/docs/import-data/import-file) and setup a payment method. - -2. **Configure clustering**. Go to the **Clusters** tab and click on the *Configure clustering detection* button. - Select the scope of data to cluster: either messages or sessions. - Filter the data by setting a date range, a specific tag, and more. - -3. **Run clustering**. - Click on the *Run cluster analysis* button to start the clustering process. Depending on the number of messages, it can take a few minutes. - - - -## How to interpret the results - -The clustering results are presented in two formats: - -- **3D Dot Cloud Graph**: Each point in the graph corresponds to an embedding of a message (or a session). Clusters are distinct groups of these points. - -- **Cluster Cards**: Each cluster is also displayed as a card. The card shows the cluster size and an automatic summary of a random sample of messages. Click on "Explore" in any card to view the messages in the cluster. - -## How to run a clustering with a custom instruction? - -By default, the clustering is run based on: `user intent` - -You can however modify this instruction in *Advanced settings*. - -Change the clustering instruction to refine how messages are grouped, to provide insights that are more aligned with your needs. You just need to enter the **topic** you want to cluster on. - -Examples of what you can enter: -- For a medical chatbot: `type of disease` -- For a customer support chatbot: `type of issue (refund, delivery, etc.)` -- For a chatbot in the e-commerce industry: `product mentioned` - -## How to run a custom clustering algorithms? - -You can use the user intent embeddings to run your own clustering algorithms. The embeddings are available through the API. [Learn more here.](/docs/models/embeddings) - -## Next steps - -Based on the clusters, define more analytics to run on your data in order to never miss a beat on what your users are talking about. Check the [event detection page](/docs/analytics/events) for more information. 
diff --git a/phospho-mkdocs/docs/analytics/evaluation.md b/phospho-mkdocs/docs/analytics/evaluation.md deleted file mode 100644 index a32daf2..0000000 --- a/phospho-mkdocs/docs/analytics/evaluation.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -title: Automatic evaluation -description: 'Evaluate and score your LLM app' ---- - -phospho enables you to evaluate the quality (success or failure) of the interactions between your users and your LLM app. - -Every time you log a task, phospho will **automatically evaluate** the success of the task. - -## How does phospho evaluate tasks? - -The evaluation is based on LLM self-critique. - -The evaluation leverages the following sources of information: -- The tasks annotated in the phospho webapp, by **you and your team** -- The **user feedbacks** sent to phospho -- The `system_prompt (str)` parameter in `metadata` when logging -- Previous tasks in the **same session** - -If the information are not available, phospho will use default heuristics. - -## How to improve the automatic evaluation? - -To improve the automatic evaluation, you can: -- Label tasks in the phospho webapp. **Invite** your team members to help you! -- Gather [user feedback](/docs/guides/user-feedback) -- Pass the `system_prompt (str)` parameter in `metadata` when [logging](/docs/guides/sessions-and-users#metadata) -- Group tasks in [sessions](/docs/guides/sessions) -- Override the task evaluations with [the analytics endpoints](/docs/integrations/python/analytics#update-logs-from-a-dataframe) - -## Annotate in the phospho webapp - -In the phospho dashboard, you can annotate tasks as a success or a failure. - -### Thumbs up / Thumbs down - -In the Transcript tab, view tasks to access the thumbs up and thumbs down buttons. -- A thumbs up means that the task was successful. -- A thumbs down means that the task failed. - -Update the evaluation by clicking on the thumbs. - -The button **changes color** to mark that this task was evaluated by a human, and not by phospho. - -### Notes - -Add notes and any kind of text with the **Notes** button next to the thumbs. - -If there is a note already written, the color of the button changes. - -## Annotate with User feedback - -You can gather annotations any way you want. For example, if you have your own tool to collect feedback (such as thumbs up/thumbs down in your chat interface), you can chose to use the phospho API. - -Trigger [the API endpoint](https://api.phospho.ai/v2/redoc#tag/Tasks/operation/post_flag_task_tasks__task_id__flag_post) to send your annotations to phospho at scale. - -Read the [full guide about user feedback](/docs/guides/user-feedback) to learn more. - -## Visualize the results - -Visualize the aggregated results of the evaluations in the _Dashboard_ tab of the phospho webapp. - -You can also visualize the results for each task in the _Sessions_ tab. Click on a session to see the list of tasks in the session. - -A green thumbs up means that the task was successful. A red thumbs down means that the task failed. Improve the automatic evaluation by clicking on the thumbs to annotate the task if needed. 
\ No newline at end of file diff --git a/phospho-mkdocs/docs/analytics/events.md b/phospho-mkdocs/docs/analytics/events.md deleted file mode 100644 index f664d09..0000000 --- a/phospho-mkdocs/docs/analytics/events.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -title: Event detection -description: "Events are the key to getting insights from your data in phospho" ---- - -Learn how to define and run events in phospho, and also how they work under the hood and how to improve them. - -## What are events in phospho? - -Events are **actions** or **behaviours** that you want to track in your data. There are three types of events: - -- **Tags**: Tags are detected in the data and can be used to filter data. Tags are described in natural language. Tags are either present, or not present in a message. -- **Scores**: Scores are values between 1 and 5 that are assigned to a message. Scores can be used to track the quality of the conversation. -- **Categories**: Categories are the result of a classification. Use categories to classify messages in different classes. For example, if you have a set of user intents, you can classify messages in these intents. - -## Create an event - -An event is a specific interaction between a user and the system you want to track. - -To define an event, go to the **Events** tab in the phospho platform and click on the **Add** button. - -![Add Event](../images/guides/getting_started/add_event.png) - -In this tab you can setup events in natural language, in this image, we have setup an event to detect when the system is unable to answer the user's question. - -By default, events are detected on all the newly imported data, but not on the past data. You need to run the events on the past data to get insights. - -## Run events on imported data - -Once you've defined your events, you need to run them on past data. - -Click on the Detect events button in the **Events** tab to run an event on your data. - -![Detect events](../images/guides/getting_started/detect_events.png) - -## How are events detected? - -Every message logged to phospho goes through an analytics pipeline. In this pipeline, phospho looks for **events** defined in your project settings. - -This pipeline uses a combination of **rules**, **machine learning**, and **large language models** to detect events. The rules are defined in the **Analytics** tab of the phospho dashboard. - -## How good is the event detection? - -To help you keep track and improve the event detection, phospho enables you **annotate** and **validate** the events detected in your data. - -Click on an event in the **Transcripts** to annotate it. This will display a dropdown where you can validate, remove or edit the event. - -Advanced **performance metrics** (F1 Score, Accuracy, Recall, Precision, R-squared, MSE) are available when you click on an event in the Analytics tab. - -## Automatic improvement of the event detection - -The event detection models are **automatically improved** and updated using **your feedback.** - - -The more you annotate and validate the events on the platform, the better the events become ! - - -Click on an event in the **Transcripts** to annotate it. This displays a dropdown where you can validate, remove or edit the event. - -We are constantly improving our algorithms to provide the best results. We're an open source project, so feel free to open an issue on our [GitHub](https://github.com/phospho-app/phospho/issues) or contribute to the codebase. We would love to hear from you! 
\ No newline at end of file diff --git a/phospho-mkdocs/docs/analytics/fine-tuning.md b/phospho-mkdocs/docs/analytics/fine-tuning.md deleted file mode 100644 index 6173442..0000000 --- a/phospho-mkdocs/docs/analytics/fine-tuning.md +++ /dev/null @@ -1,99 +0,0 @@ ---- -title: Event Fine-tuning -description: "phospho enables you to fine-tune an LLM to detect specific events." ---- - - - LLM fine-tuning for event detection is in Alpha. Contact us to request access. - - -## Preparing your data - -To fine-tune a model for event detection, you need to prepare a `csv` dataset that contains the following columns: - -- `detection_scope` (`Literal`): can only be one of the following values: `task_input_only` or `task_output_only` -- `task_input` (`str`): the input text for a task (uusually the user input) -- `task_output` (`str`): the output text for a task (usually the assistant response) -- `event_description` (`str`): the event description, like the prompt you use to define the event you want to dectect while using phospho -- `label` (`bool`): True if the event is indeed present in the text, False otherwise - -A good dataset size is at least 2000 examples. - -## Uploading the dataset to phospho - -To upload the dataset to phospho, use directly the API. Don't forget to set your API key in the `Authorization` header. - -```bash -curl -X 'POST' \ - 'https://api.phospho.ai/v2/files' \ - -H 'accept: application/json' \ - -H 'Authorization: Bearer $PHOSPHO_API_KEY' \ - -H 'Content-Type: multipart/form-data' \ - -F 'file=@/path/to/your/local/file.csv.csv;type=text/csv' -``` - -Keep the `file_id` returned by the API, you will need it to fine-tune the model. - -## Launching the fine-tuning - -We recomend using the `mistralai/Mistral-7B-Instruct-v0.1` model for event detection. -Once the dataset is uploaded, you can fine-tune the model using the following API call: - -```bash -curl -X 'POST' \ - 'https://api.phospho.ai/v2/fine_tuning/jobs' \ - -H 'accept: application/json' \ - -H 'Authorization: Bearer $PHOSPHO_API_KEY' \ - -H 'Content-Type: application/json' \ - -d '{ - "file_id": "YOUR_FILE_ID", - "parameters": {"detection_scope": "YOUR_DETECTION_SCOPE", "event_description": "YOUR EVENT DESCRIPTION HERE"}, - "model": "mistralai/Mistral-7B-Instruct-v0.1" -}' -``` - -Note the fine-tuning id returned by the API, you will need it to check the status of the job. It should take approximately 20 minutes to complete. - -The finetuning job will take some time to complete. You can check the status of the job using the following API call: - -```bash -curl -X 'GET' \ - 'https://api.phospho.ai/v2/fine_tuning/jobs/FINE_TUNING_JOB_ID' \ - -H 'accept: application/json' \ - -H 'Authorization: Bearer $PHOSPHO_API_KEY' -``` - -When the fine-tuning job is completed, you can get the fine-tuned model id in the `fine_tuned_model` field of the response. - -## Using the fine-tuned model for your event detection - -You can now use the fine-tuned model to detect events in your text. To do so, update the configs. - -First, get your current project settings: - -```bash -curl -X 'GET' \ - 'https://api.phospho.ai/v2/projects/YOUR_PROJECT_ID' \ - -H 'accept: application/json' \ - -H 'Authorization: Bearer $PHOSPHO_API_KEY' -``` - - - The POST request will overwrite the current project settings. Make sure to - include all the settings you want to keep in the new settings object. - - -In the settings object, add (or change) the `detection_engine` to the `fine_tuned_model` id you got from the fine-tuning job. 
Then, update the project settings: - -```bash -curl -X 'POST' \ - 'https://api.phospho.ai/v2/projects/YOUR_PROJECT_ID' \ - -H 'accept: application/json' \ - -H 'Authorization: Bearer $PHOSPHO_API_KEY' \ - -H 'Content-Type: application/json' \ - -d '{ - "settings": YOUR_UPDATED_SETTINGS_OBJECT -}' -``` - -You're all set! You can now use the fine-tuned model to detect events in your text. diff --git a/phospho-mkdocs/docs/analytics/language.md b/phospho-mkdocs/docs/analytics/language.md deleted file mode 100644 index dbcc006..0000000 --- a/phospho-mkdocs/docs/analytics/language.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: Language Detection -description: "Detect the language of your users" ---- - -Detect what language your users are speaking in. This lets you analyze in what language your users are interacting with your assistant, and improve it accordingly. - -Language detection is based on the **user message**, so the interaction below will be flagged as english, despite the assistant answering in French. - -| User | Assistant | -|-------------------|------------------------------------| -| What can you do? | Je ne peux pas rรฉpondre en anglais | - -The language detection method is based on keywords. If the input is very short, the language detection might not be accurate. - -In the Transcripts, you can **filter** by language. \ No newline at end of file diff --git a/phospho-mkdocs/docs/analytics/sentiment-analysis.md b/phospho-mkdocs/docs/analytics/sentiment-analysis.md deleted file mode 100644 index 940d036..0000000 --- a/phospho-mkdocs/docs/analytics/sentiment-analysis.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: Sentiment Analysis -description: "Rate the sentiment of your users" ---- - -Detect the sentiment of your users. An automatic sentiment analysis is performed on the **user message**. This lets you know whether your users are happy, sad, or neutral. - -The sentiment and its magnitude are score. This corresponds to a **negative or positive sentiment** and how **strong** it is. - -We then translate this data into a simple, readable label for you: **Positive, Neutral, Mixed and Negative**. - -- **Positive**: The sentiment score is greater than **0.3** -- **Neutral**: The sentiment score is between **-0.3** and **0.3** -- **Mixed**: The sentiment score is between **-0.3** and **0.3** but the magnitude is **greater than 0.6** -- **Negative**: The sentiment score is less than **-0.3** - -You can also **filter** your data by sentiment in the Transcripts. \ No newline at end of file diff --git a/phospho-mkdocs/docs/analytics/sessions-and-users.md b/phospho-mkdocs/docs/analytics/sessions-and-users.md deleted file mode 100644 index 8ea3ffd..0000000 --- a/phospho-mkdocs/docs/analytics/sessions-and-users.md +++ /dev/null @@ -1,194 +0,0 @@ ---- -title: Sessions and Users -description: "Group tasks together into sessions and attach them to users." ---- - -A **task** is a single operation made by the user. _For example, a user sending a question to ChatGPT and receiving an answer is a task._ - -A **session** groups multiple tasks that happen in the same context. _For example, multiple messages in the same ChatGPT chat is a session._ - -A **user** is the end user of your LLM app. _For example, the human chatting with ChatGPT._ - -!!! info - **Tasks, sessions and users are just abstractions.** They are meant to help you understand the context of a log. You can use them as you want. - - For example, - - A task can be _"Fetch documents in a database"_ for a RAG. 
- - A session can be _"The code completions in a single file"_ for a coding copilot. - - A user can be _"The microservice querying the API"_ for a question answering model. - - -## Tasks - -### Inputs and Outputs - -A **task** is made of an `input` and an **optional** `output`, which are text readable by humans. Think of them as the messages in a chat. - -On top of that, you can pass a `raw_input` and a `raw_output`. Those are the raw data that your LLM app received and produced. They are mostly meant for the developers of your LLM app. - -### Metadata - -To help you understand the context of a task, you can pass a **metadata** dict to your tasks. - -For example, the version of the model used, the generation time, the system prompt, the user_id, etc. - -=== "Python" - - ```python - import phospho - - phospho.init() - - phospho.log( - input="What is the meaning of life?", - output="42", - #ย Metadata - raw_input={"chat_history": ...}, - metadata={ - "system_prompt": "You are a helpful assistant.", - "version_id": "1.0.0", - "generation_time": 0.1, - }, - ) - ``` - -=== "Javascript" - - ```javascript - import { phospho } from "phospho"; - - phospho.init(); - - phospho.log({ - input: "What is the meaning of life?", - output: "42", - // Metadata - raw_input={"chat_history": ...}, - metadata={ - "system_prompt": "You are a helpful assistant.", - "version_id": "1.0.0", - "generation_time": 0.1, - }, - }); - ``` - - -The metadata is a dictionary that can contain any key-value pair. We recommend to stick to str keys and str or float values. - -Note that the output is optional, but the input is required. - -#### Special metadata keys - -- `system_prompt`: The prompt used to generate the output. It will be displayed separately in the UI. -- `version_id`: The version of the app. Used for [AB testing](/docs/analytics/ab-tests). -- `user_id`: The id of the user. Used for [user analytics](#users). - - -### Tasks are not just calls to LLMs - -A task can be a call to a LLM. But it can also be something completely different. - -For example, a task can be a call to a database, or the result of a complex chain of thought. - -Tasks are an abstraction that you can use as you want. - -### Task Id - -By default, when logging, a task id is automatically generated for you. - -Generating your own task id is useful to attach user feedback later on (on this topic, see [User Feedback](/docs/guides/user-feedback)). - -## Sessions - -If you're using phospho in a conversational app such a chatbot, group tasks together into sessions. - -- Sessions are easier to read for humans. -- They improve evaluations and event detections by providing context. -- They help you understand the user journey. - -### Session Id - -To create sessions, pass a `session_id` when logging. - -The session id can be any string. However, we recommend to use a UUID generated by a random hash function. We provide a helper function to generate a session id. 
- -=== "Python" - - ```python - session_id = phospho.new_session() - - phospho.log( - input="What is the meaning of life?", - output="42", - session_id=session_id, - ) - ``` - -=== "Javascript" - - ```javascript - const sessionId = phospho.newSession(); - - phospho.log({ - input: "What is the meaning of life?", - output: "42", - sessionId: sessionId, - }); - ``` - -=== "Langchain" - - ```python - import phospho - from phospho.integrations import PhosphoLangchainCallbackHandler - - session_id = phospho.new_session() - - response = retrieval_chain.invoke( - "Chain input", - config={"callbacks": [ - # Pass the session_id to the callback - PhosphoLangchainCallbackHandler(session_id=session_id) - ]} - ) - ``` - - -### Session insights - -Sessions are useful for insights about short term user behavior. -- Monitor for how long a user chats with your LLM app before disconnecting -- Compute the average number of messages per session -- Discover what kind of messages ends a session. - -## Users - -Find out how specific users interact with your LLM app by logging the user id. - -To do so, attach tasks and sessions to a `user_id` when logging. The user id can be any string. - -=== "Python" - - ```python - phospho.log( - input="What is the meaning of life?", - output="42", - user_id="roger@gmail.com", - ) - ``` - -=== "Javascript" - - ```javascript - phospho.log({ - input: "What is the meaning of life?", - output: "42", - user_id: "roger@gmail.com", - }); - ``` - -User analytics are available in the tabs Insights/Users. -- Discover aggregated metrics (number of tasks, average session duration, etc.) -- Access the tasks and sessions of a user by clicking on the corresponding row. - -Monitoring users helps you discover power users of your app, abusive users, or users who are struggling with your LLM app. \ No newline at end of file diff --git a/phospho-mkdocs/docs/analytics/tagging.md b/phospho-mkdocs/docs/analytics/tagging.md deleted file mode 100644 index 525fefa..0000000 --- a/phospho-mkdocs/docs/analytics/tagging.md +++ /dev/null @@ -1,86 +0,0 @@ ---- -title: Automatic tagging -description: "phospho automatically tags your data and warns you when needed" ---- - -## How are tags detected? - -Every message logged to phospho goes through an analytics pipeline. In this pipeline, phospho looks for **tags** defined in your project settings. - -Tags are described in **natural language**. Create tags to detect topics, hallucinations, behaviours, intents, or any other concept you want to track. - -Tags are displayed on the platform and you can use them to filter data. - -Be notified when a tag is detected with **webhooks**. - -### Example of tags - -- The user is trying to book a flight -- The user thanked the agent for its help -- The user is asking for a refund -- The user bought a product -- The assistant responded something that could be considered financial advice -- The assistant talked as if he was a customer, and not a support - -## Create tags - -Go to the **Analytics** tab of the [phospho dashboard](https://platform.phospho.ai/), and click Add Tagger on the right. - -You will find some event templates like Coherence and Plausibility to get you started. - -![Events tab](../images/explore/events%20detection/Create%20event.png) - - -### Tag definition - -The event description is a natural language description of the tag. Explain how to detect the tag in an interaction as if you were explaining it to a 5 years old or an alien. 
- -In the description, refer to your user as "the user" and to your LLM app as "the assistant". - -!!! example "Example of an event description" - > _The user is trying to book a flight. The user asked a question about a flight. - Don't include fight suggestions from the agent if the user didn't ask for it._ - -Manage Tags in the **Analytics** tab. Click delete to delete a tag detector. - -### Tag suggestion - -Note that you can also use the magic wand button on any session to get a suggestion for a possible tag that has been detected in the session. - -![Tag suggestion](../images/explore/events%20detection/Event%20suggestion.png) - -The button is right next to "Events" in the Session tab. - -## Webhooks - -Add an optional webhook to be notified when an event is detected. Click on **Additional settings** to add the webhook URL and the eventual Authorization header. - -### What is a webhook? - -Webhooks are automated messages sent from apps when something happens. They have a payload and are sent to a unique URL, which is like an app's phone number or address. - -If you have an LLM app with a backend, you can create webhooks. - -### How to use the webhook? - -Every time the event is detected, phospho will send a `POST` request to the webhook with this payload: - -```json -{ - "id": "xxxxxxxxx", // Unique identifier of the detected event - "created_at": 13289238198, // Unix timestamp (in seconds) - "event_name": "privacy_policy", // The name of the event, as written in the dashboard - "task_id": "xxxxxxx", // The task id where the event was detected - "session_id": "xxxxxxx", // The session id where the event was detected - "project_id": "xxxxxxx", // The project id where the event was detected - "org_id": "xxxxxxx", // The organization id where the event was detected - "webhook": "https://your-webhook-url.com", // The webhook URL - "source": "phospho-unknown", // Starts with phospho if detected by phospho -} -``` - -Retrieve the messages using the `task_id` and the [phospho API.](https://api.phospho.ai/v2/redoc#tag/Tasks/operation/get_task_tasks__task_id__get) - -### Examples - -Use webhooks to send slack notifications, emails, SMS, notifications, UI updates, or to trigger a function in your backend. \ No newline at end of file diff --git a/phospho-mkdocs/docs/analytics/usage-based-billing.md b/phospho-mkdocs/docs/analytics/usage-based-billing.md deleted file mode 100644 index 486f8f8..0000000 --- a/phospho-mkdocs/docs/analytics/usage-based-billing.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Usage-based billing -description: "How phospho usage is measured" ---- - -This documents documents the `usage based` billing plan of the hosted phospho platform. - -## What is usage-based billing? - -Every analytics run on phospho consumes a certain amount of credits. - -At the end of the month, the total credits consumed by all the analytics runs are calculated and the user is billed based on the total credits consumed. - -The cost per credit depends on the plan you are on. - -## How many credits does an analytics run consume? 
- -| Analytics run | Credits consumed | -----------------|------------------| -| Logging 1 Task | 0 | -| Event detection on 1 Task: Tagger | 1 | -| Event detection on 1 Task: Scorer | 1 | -| Event detection on 1 Task: Classifier | 1 | -| Clustering on 1 Task | 2 | -| Event detection on 1 Session: Tagger | 1 * number of tasks in the session | -| Event detection on 1 Session: Scorer | 1 * number of tasks in the session | -| Event detection on 1 Session: Classifier | 1 * number of tasks in the session | -| Clustering on 1 Session | 2 * number of tasks in the session | -| Language detection on 1 Task | 1 | -| Sentiment detection on 1 Task | 1 | - -## How to optimize credit consumption? - -- Instead of using multiple taggers, use a single classifier -- Filter the scope of clustering to only the required tasks -- Disable unnecessary analytics in Project settings - diff --git a/phospho-mkdocs/docs/analytics/user-feedback.md b/phospho-mkdocs/docs/analytics/user-feedback.md deleted file mode 100644 index 46f1100..0000000 --- a/phospho-mkdocs/docs/analytics/user-feedback.md +++ /dev/null @@ -1,288 +0,0 @@ ---- -title: User Feedback -description: How to send the user feedback from your app to phospho? ---- - -Logging user feedback is a crucial part of evaluating an LLM app. Even though user feedback is subjective and biased towards negative, it is a valuable source of information to improve the quality of your app. - -Setup user feedback in your app to log the user feedback to phospho, review it in the webapp, improve the automatic evaluations, and make your app better. - -## Architecture: what's the task_id? - -In your app, you should collect user feedback **after** having logged a task to phospho. Every task logged to phospho is identified by a unique **task_id**. - -For phospho to know what task the user is giving feedback on, you need to keep track of the **task_id**. - -There are two ways to manage the task_id: frontend or backend. - -Any way you chose, there are helpers in the phospho package to make it easier. - -### Option 1: Task id managed by Frontend - -1. In your frontend, you generate a task id using UUID V4 -2. You pass this task id to your backend. The backend executes the task and log the task to phospho with this task id. -3. In your frontend, you collect user feedback based on this task id. - - -### Option 2: Task id managed by Backend - -1. In your frontend, you ask your backend to execute a task. -2. The backend generates a task id using UUID V4, and logs the task to phospho with this task id. -3. The backend returns the task id to the frontend. -4. In your frontend, you collect user feedback based on this task id. - -## Backend: Log to phospho with a known task_id - -=== "Python" - - The phospho package provides multiple helpers to manage the task_id. - - ```python - pip install phospho - ``` - - Make sure you have initialized the phospho package with your project_id and api_key somewhere in your app. - - ```python - import phospho - phospho.init(project_id="your_project_id", api_key="your_api_key") - ``` - - You can fetch the task_id generated by `phospho.log`: - - ```python - logged_content = phospho.log(input="question", output="answer") - task_id: str = logged_content["task_id"] - ``` - - To generate a new task_id, you can use the `new_task` function. 
- - ```python - task_id: str = phospho.new_task() - - #ย Pass it to phospho.log to create a task with this id - phospho.log(input="question", output="answer", task_id=task_id) - ``` - - To get the latest task_id, you can use the `latest_task_id` variable. - - ```python - latest_task_id = phospho.latest_task_id - ``` - -=== "Javascript" - - The phospho package provides multiple helpers to manage the task_id. - - ```bash - npm install phospho - ``` - - Make sure you have initialized the phospho package with your project_id and api_key somewhere in your app. - - ```javascript - import { phospho } from "phospho"; - phospho.init({ projectId: "your_project_id", apiKey: "your_api_key" }); - ``` - - You can fetch the task_id generated by `phospho.log`: - - ```javascript - const loggedContent = await phospho.log({ - input: "question", - output: "answer", - }); - const taskId: string = loggedContent.task_id; - ``` - - The task_id from the loggedContent is in snake_case. - - To generate a new task_id, you can use the `newTask` function. - - ```javascript - const taskId = phospho.newTask(); - - // Pass it to phospho.log to create a task with this id - phospho.log({ input: "question", output: "answer", taskId: taskId }); - ``` - - To get the latest task_id, you can use the `latestTaskId` variable. - - ```javascript - const latestTaskId = phospho.latestTaskId; - ``` - -=== "API" - - When using the API directly, you need to manage the task_id by yourself. - - Create a task_id by generating a string hash. It needs to be unique for each task. - - ```bash - TASK_ID=$(uuidgen) - ``` - - Pass this task_id to the `log` endpoint. - - ```bash - curl -X POST https://api.phospho.ai/v2/log/$PHOSPHO_PROJECT_ID \ - -H "Authorization: Bearer $PHOSPHO_API_KEY" \ - -H "Content-Type: application/json" \ - -d '{ - "batched_log_events": [ - { - "input": "your_input", - "output": "your_output", - "task_id": "$TASK_ID" - } - ] - }' - ``` - - -## Frontend: Collect user feedback - -Once your backend has executed the task and logged it to phospho with a known **task_id**, send the **task_id** back to your frontend. - -In your frontend, using the **task_id**, you can collect user feedback and send it to phospho. - -=== "React" - - We provide [React components](https://github.com/phospho-app/phospho-ui-react) to kickstart your user feedback collection in your app. - - ```bash - npm install phospho-ui-react - ``` - - ```javascript - import "./App.css"; - import { FeedbackDrawer, Feedback } from "phospho-ui-react"; - import "phospho-ui-react/dist/index.css"; - - function App() { - return ( -
        <FeedbackDrawer
          onSubmit={(feedback: Feedback) =>
            console.log("Submitted: ", feedback)
          }
          onClose={(feedback: Feedback) => console.log("Closed: ", feedback)}
        />
- ); - } - - export default App; - ``` - -=== "Web" - - In the browser, use the `sendUserFeedback` function. This function doesn't need your phospho api key. This is done to avoid leaking your phospho API key. However, this function still requires the `projectId`. - - Here is how to use the `sendUserFeedback` function. - - ```javascript - import { sendUserFeedback } from "phospho"; - - // Handle logging in your backend, and send the task_id to the browser - const taskId = await fetch("https://your-backend.com/some-endpoint", { - method: "POST", - headers: { - "Content-Type": "application/json", - }, - body: JSON.stringify({ - your: "stuff", - }), - }) - .then((res) => res.json()) - .then((data) => data.task_id); - - // When you collect feedback, send it to phospho - // For example, when the user clicks on a button - sendUserFeedback({ - projectId: "your_project_id", - tastId: taskId, - flag: "success", // or "failure" - source: "user", - notes: "Some notes (can be None)", - }); - ``` - -=== "Other" - - If you are using a different language or a different way to manage the frontend, you can use the API endpoint `tasks/{task-id}/flag` directly. - - This endpoint is public. You only need to pass the task_id and project_id. This is done to avoid leaking your phospho API key. - - ```bash - curl -X POST https://api.phospho.ai/v2/tasks/$TASK_ID/flag \ - -H "Content-Type: application/json" \ - -d '{ - "project_id": "$PHOSPHO_PROJECT_ID", - "flag": "success", - "flag_source": "user" - "notes": "This is what the user said about this task" - }' - ``` - - -## Backend: Manage user feedback collection - -If you don't want to collect user feedback in the frontend, you can instead create an endpoint in your backend and collect user feedback there. - -=== "Python" - - The phospho python package provides a `user_feedback` function to log user feedback. - - ```python - #ย See the previous section to get the task_id - task_id = ... - - phospho.user_feedback( - task_id=task_id, - flag="success", #ย or "failure" - source="user", - notes="Some notes (can be None)", #ย optional - ) - ``` - -=== "Javascript" - - The phospho javascript module provides a `userFeedback` function to log user feedback. - - ```javascript - const taskId = ... //ย See the previous section to get the task_id - - phospho.userFeedback({ - tastId: taskId, - flag: "success", // or "failure" - flagSource: "user", - notes: "Some notes (can be None)", - }); - ``` - -=== "API" - - You can use the API endpoint `tasks/{task-id}/flag` directly. - - ```bash - curl -X POST https://api.phospho.ai/v2/tasks/$TASK_ID/flag \ - -H "Authorization: Bearer $PHOSPHO_API_KEY" \ - -H "Content-Type: application/json" \ - -d '{ - "flag": "success", - "flag_source": "user" - "notes": "This is what the user said about this task" - }' - ``` diff --git a/phospho-mkdocs/docs/api-reference/introduction.md b/phospho-mkdocs/docs/api-reference/introduction.md deleted file mode 100644 index a3831df..0000000 --- a/phospho-mkdocs/docs/api-reference/introduction.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Getting started -description: Start logging your fist text message using phospho API ---- - -Most phospho features are available through the API. The base URL of the phospho API is `https://api.phospho.ai/v3`. 
- -If you do not want to use the API directly, we provide several SDKs to make it easier to integrate phospho into your products: - -- [Python SDK](/docs/integrations/python/logging) -- [JavaScript SDK](/docs/integrations/javascript/logging) -- [Langchain and Langsmith](/docs/integrations/langchain) -- [Langfuse](/docs/import-data/import-langfuse) -- [Supabase](/docs/integrations/supabase) - -The API full reference is available [here](https://api.phospho.ai/v3/redoc) - -## Dedicated endpoints - -Contact us at *contact@phospho.ai* to discuss integrating phospho into your products through dedicated endpoints, allowing seamless, behind-the-scenes functionality for your customers. diff --git a/phospho-mkdocs/docs/cli.md b/phospho-mkdocs/docs/cli.md deleted file mode 100644 index 96fd329..0000000 --- a/phospho-mkdocs/docs/cli.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -title: Command Line Interface -description: Use the Phospho CLI to interact with the Phospho API ---- - -Use the phospho CLI to run **offline tests.** - -# Installation - -The phospho CLI is a Python package. Install it with pip: - -```bash -pip install phospho -phospho --version # Check the installation -``` - -# Initialization - -Make sure you have installed the CLI and created a phospho account. - - -Login to the CLI with the `init` command: - -```bash -phospho init -``` - -This does two things: - - -1. It stores phospho credentials in your home directory: `~/.phospho/config`. Use the `config` command to see the stored credentials: - -```bash -phospho config -``` - -2. It creates a file `phospho_testing.py` in the current directory. You can [edit this file](/docs/python/testing) to customize your tests. - - -# Run the tests - -To run the tests in `phospho_testing.py`, use the `test` command: - -```bash -phospho test -``` - -Discover the results by following the link in the terminal output or by visiting the [phospho platform](https://platform.phospho.ai). - -# Add tests and customize tests - -Tests are written in Python. Edit the `phospho_testing.py` file to add your tests. - -
- -- :material-language-python:{ .lg .middle } __phospho testing module__ - - --- - - Learn how to edit phospho tests - - [:octicons-arrow-right-24: Read more](#) - -
diff --git a/phospho-mkdocs/docs/examples/introduction.md b/phospho-mkdocs/docs/examples/introduction.md deleted file mode 100644 index d9408fa..0000000 --- a/phospho-mkdocs/docs/examples/introduction.md +++ /dev/null @@ -1,130 +0,0 @@ ---- -title: Phospho examples -description: Implement logging and discover phospho with these examples ---- - -## Python - -### Logging - -Set up phospho logging in your app. Run `pip install -U phospho` and get a minimal, powerful logging system. - -
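As a reference point before the full examples below, here is a minimal logging sketch (assuming `PHOSPHO_API_KEY` and `PHOSPHO_PROJECT_ID` are set in your environment, as described in the setup guide):

```python
import phospho

# Reads PHOSPHO_API_KEY and PHOSPHO_PROJECT_ID from the environment
phospho.init()

# Log a single task: what went into your app and what came out of it
phospho.log(
    input="What's the capital of Fashion?",
    output="It's Paris of course.",
)
```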
- -- :material-language-python:{ .lg .middle } __OpenAI agent__ - - --- - - A simple generic assistant in the CLI, logging to phospho. - - [:octicons-arrow-right-24: View Example](/docs/integrations/python/examples/openai-agent) - -- :material-crown:{ .lg .middle } __OpenAI agent + Streamlit__ - - --- - - An OpenAI assistant in Streamlit, logging to phospho. - - [:octicons-arrow-right-24: View Example](/docs/integrations/python/examples/openai-streamlit) - -- :material-crown:{ .lg .middle } __url2chat: Chat with any website__ - - --- - - Streamlit webapp logging messages and user feedback to phospho. - - [:octicons-arrow-right-24: View Example](https://github.com/phospho-app/url2chat) - -- :material-bird:{ .lg .middle } __Langchain Python__ - - --- - - A simple Langchain retrieval chain, logging to phospho. - - [:octicons-arrow-right-24: View Example](/docs/integrations/langchain) - -
- -### Lab - -Run these examples locally to discover phospho. Just `pip install -U "phospho[lab]"` and get hacking. - -
- -- :fontawesome-solid-vial:{ .lg .middle } __Quickstart__ - - --- - - How to run the Event detection pipeline on a dataset and optimize the pipeline. - - [:octicons-arrow-right-24: View Example](https://github.com/phospho-app/phospho/blob/dev/examples/lab/quicksart.ipynb) - -- :fontawesome-solid-vial:{ .lg .middle } __Create a Custom Job__ - - --- - - How to create a custom job and run it with phospho lab. - - [:octicons-arrow-right-24: View Example](https://github.com/phospho-app/phospho/blob/dev/examples/lab/custom-job.ipynb) - -- :fontawesome-solid-vial:{ .lg .middle } __Parallel calls to OpenAI on a dataset__ - - --- - - How to run parallel calls to OpenAI on a dataset with parallelization, while respecting rate limits. - - [:octicons-arrow-right-24: View Example](https://github.com/phospho-app/phospho/blob/dev/examples/lab/parallel-calls.ipynb) - -
- -### More Python! - -
- -- :material-github:{ .lg .middle } __More Python examples__ - - --- - - Check out the examples on GitHub! - - [:octicons-arrow-right-24: View Examples](https://github.com/phospho-app/phospho/tree/dev/examples) - -
- -## JavaScript - -Run `npm i phospho` and get a minimal, powerful logging system. - -
- -- :material-github:{ .lg .middle } __Discover JavaScript examples__ - - --- - - Check out the examples on GitHub! - - [:octicons-arrow-right-24: View Examples](https://github.com/phospho-app/phosphojs/tree/main/examples) - -
- -## Other - -
- -- :material-bolt:{ .lg .middle } __Supabase__ - - --- - - Implement phospho logging in your Supabase app. - - [:octicons-arrow-right-24: View Example](/docs/integrations/supabase) - -- :material-discord:{ .lg .middle } __Can't find your example?__ - - --- - - Tell us what you need on Discord! - - [:octicons-arrow-right-24: Join Us](https://discord.gg/m8wzBGQA55) - -
diff --git a/phospho-mkdocs/docs/favicon.svg b/phospho-mkdocs/docs/favicon.svg deleted file mode 100644 index 8116cfa..0000000 --- a/phospho-mkdocs/docs/favicon.svg +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/phospho-mkdocs/docs/getting-started.md b/phospho-mkdocs/docs/getting-started.md deleted file mode 100644 index 9fcaa52..0000000 --- a/phospho-mkdocs/docs/getting-started.md +++ /dev/null @@ -1,131 +0,0 @@ ---- -title: "Getting started" -description: "Clusterize your text messages in 5 minutes. No code required." ---- - -This guide will help you get started with the hosted version of [phospho](https://platform.phospho.ai). - -1. **Create an account if needed**. Go to [phospho.ai](https://platform.phospho.ai). This is free. -2. **Import your first messages**. Upload a csv file with text messages (or log to the API). -3. **Run your first clustering**. Discover the results in your dashboard. - -## 1. Signing up - -Go to the [phospho platform](https://platform.phospho.ai/). Login or create an account if you don't have one. - -We recommend you use your company email address to create an account. This will let you easily invite your team members to collaborate on the same project. - -## 2. Import your first messages - -There are [several ways](./import-data/import-file.md) to import your data to phospho. The easiest is to upload a file. Let's see how to do it. - -### Format your file - -Format your `.csv` or `.xlsx` file to have the following columns: - -- `input` : the input text data, ususally the user message -- `output` : the output text, ususally the LLM app response - -Additonally, you can add the following columns: - -- `task_id`: an id of the task (input/output couple) -- `session_id`: an id of the session. Messages with the same session_id will be grouped together -- `user_id`: the id of the user that sent the message -- `created_at`: the creation date of the task (format it like `"2021-09-01 12:00:00"`) -- other columns will be stored as _metadata_ and can be used for filtering - -The maximum upload size with this method is 500MB. - -### Upload the file to the plateform - -Click the setting icon at the top right of the screen and select `Import data`. - -![Import data](./images/import/import_data.png) - - -Then, click the **Upload dataset** button and use **Choose file** button to select your file. After selecting the file, click on the **Send file** button. - -Your file will be uploaded and processed in a few minutes, depending on the size of the file. - -## 3. Run your first clustering - -!!! note - - You need to have a **payment method** set up to run a clustering. Add a - payment method in Settings and claim your free credits. - - -Now that you imported data, you can run your first clustering. - -Go to the **Clusters** page by clicking on **Clusters** on the sidebar. - -On top, click on the `Configure clusters detection`button. Change the parameters if needed. Finally, click on the `Run cluster analysis` button. - -The clustering will take some time to run. When it's finished, you'll see the results on the page. - -![Clustering](./images/clustering-demo.gif) - -Deep dive into the clusters or try different parameters to get different results. - -### Tips - -!!! info - - Running a clustering costs 2 credits per message. - -- Click on a cluster to see the messages inside. -- Click on the pickaxe icon to breakdown a cluster into smaller clusters. -- Try different parameters (filters, scope, user query) to get different results. 
Learn more about clustering [here](/docs/analytics/clustering).

## Tired of uploading files? Set up the API

Learn more about how to [log to phospho](/docs/import-data/api-integration) in your app in a few minutes.

## Next steps
- -- :material-tag-multiple:{ .lg .middle } __Automatic tagging__ - - --- - - Automatically annotate your text data and be alerted. **Take action.** - - [:octicons-arrow-right-24: Tagging](/docs/analytics/tagging) - -- :material-cluster:{ .lg .middle } __Unsupervised clustering__ - - --- - - Group users' messages based on their intention. **Find out what your users are talking about.** - - [:octicons-arrow-right-24: Clustering](/docs/analytics/clustering) - -- :material-test-tube:{ .lg .middle } __AB Testing__ - - --- - - Run experiments and iterate on your LLM app, while keeping track of performances. **Keep shipping.** - - [:octicons-arrow-right-24: AB Testing](/docs/analytics/ab-testing) - -- :material-cog-sync:{ .lg .middle } __Flexible evaluation pipeline__ - - --- - - Discover how to run and design a text analytics pipeline using natural language. **No code needed.** - - [:octicons-arrow-right-24: Evaluation pipeline](/docs/analytics/events) - -- :material-account-details:{ .lg .middle } __User analytics__ - - --- - - Detect user languages, sentiment, and more. **Get to know power users.** - - [:octicons-arrow-right-24: User analytics](/docs/analytics/language) - -
diff --git a/phospho-mkdocs/docs/guides/LLM-judge.md b/phospho-mkdocs/docs/guides/LLM-judge.md
deleted file mode 100644
index f2aa4dd..0000000
--- a/phospho-mkdocs/docs/guides/LLM-judge.md
+++ /dev/null
@@ -1,94 +0,0 @@
---
title: LLM as a Judge
description: "Learn how to set up LLM as a judge on the phospho platform"
---

The [phospho platform](https://phospho.ai/) allows you to run events on your logs to score and analyze certain aspects of your data.

One of the ways to set up such events is through **LLM as a judge techniques**.

These can help you detect fraudulent inputs, angry customers, and more broadly, monitor your LLM apps.

## Walkthrough

The platform allows you to create events that leverage LLM as a judge techniques. Let's see how you can achieve this in 3 simple steps.

### 1. Head to the Events tab

Starting from the platform, head to the **Events** tab on the top/left side of the screen.

This takes us to a panel of all the events that have been created.

The top part of the screen shows us the **top performing events**, while the bottom part shows us all the events that have been created.

### 2. Add an event

To set up LLM as a judge, let's click on the **Add Event** button on the right side of the screen.

![Events Page](../images/guides/LLM_judge/events_page.png)

### 3. Create your LLM as a judge based event

A panel opens up with information to configure. It should look something like this.

![Add Event](../images/guides/LLM_judge/add_event.png)

Let's imagine we want to flag a conversation whenever our LLM app is unable to answer a user's question.

Let's configure an event to detect this.

Events are configured through these fields:

- **Event name**: The name of the event. This can be anything you like, e.g.: _"LLM unable to answer"_
- **Description**: A description of the event. Explain what the event is about in natural language, referring to the user as "the user" and to the LLM as "the assistant", e.g.: _"The assistant is unable to answer the user's question"_

Then press **Add Event** to save your event.

All future logs will be analyzed for this event. If it is detected, we will flag the log.

!!! info
    You can also set up more advanced events by changing these parameters:

    - **Detection scope**: The range of messages that the LLM should look at. It can be the task (User/Assistant exchange), the session (the whole conversation), the task input only (User message), or the task output only (Assistant message).
    - **Engine**: By default, events are set up with LLM as a judge techniques, but you can also match regexes and keywords. We are working hard to expand this list.
    - **Output Type**: The type of output you want to get from the event. You can choose between **Boolean** (the event is either present or not) and **Score** (we score the likelihood of this event from 0 to 5).
    - **Webhook**: To leverage this event and connect it to other services, you can set up a webhook URL which will be called when the event is detected.
- - -## Example events to setup - -With this technique, you can setup a wide range of events, here are some examples: - -**Penetration testing and fraud detection**: The user is trying to jailbreak the assistant - -**Human interaction request**: The user is asking for a human to take over the conversation - -**LLM unable to answer**: The assistant is unable to answer the user's question - -**Code request**: The user is asking the assistant to write or review code - -**Information request**: The user is asking the assistant for more specific information - -## Next steps - -
- -- :material-target:{ .lg .middle } __Figure out User Intentions__ - - --- - - Figure out what your users are talking about. **See through the fog** - - [:octicons-arrow-right-24: Read more](#) - -- :material-chart-pie:{ .lg .middle } __Understand your data__ - - --- - - Get insights on your data through visualization, clustering and more. **Quick and easy** - - [:octicons-arrow-right-24: Read more](#) - -
diff --git a/phospho-mkdocs/docs/guides/export-dataset-argilla.md b/phospho-mkdocs/docs/guides/export-dataset-argilla.md deleted file mode 100644 index c69ac6e..0000000 --- a/phospho-mkdocs/docs/guides/export-dataset-argilla.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: Export a dataset to Argilla -description: "You can generate a dataset from your project and export it to Argilla in one click." ---- - -Argilla is an open-source data labelling platform. Learn more about it on the [Argilla website](https://argilla.io/). - -!!! note - Contact us to get access to this feature. We can setup Argilla for you or - connect your existing Argilla instance. - -## Exporting a dataset to Argilla - -To export a dataset to Argilla, go to the Integrations tab of the platform. In the Argilla section, click on the **Export a new dataset** button. - -## Viewing and labelling the dataset in Argilla - -Once the dataset is exported, you can view and label it in Argilla. Just click the **View your datasets** button in the Argilla section of the Integrations tab. -You will need to log in with the username and password we provided you. - -To learn more on how to label a dataset in Argilla, refer to the [Argilla documentation](https://docs.argilla.io/en/latest/practical_guides/annotate_dataset.html). diff --git a/phospho-mkdocs/docs/guides/getting-started.md b/phospho-mkdocs/docs/guides/getting-started.md deleted file mode 100644 index cdf38fc..0000000 --- a/phospho-mkdocs/docs/guides/getting-started.md +++ /dev/null @@ -1,196 +0,0 @@ ---- -title: "Run analytics on your LLM app data" -description: "Discover how to run analytics on your LLM app data in 5 minutes" ---- - -This guide will help you get started with the [phospho platform](https://platform.phospho.ai). - -1. Create an account on [phospho.ai](https://platform.phospho.ai). -2. Import your data - - From [CSV/Excel](#import-from-file) - - From [LangSmith](#import-from-langsmith) - - From [LangFuse](#import-from-langfuse) - - From [API](/docs/getting-started) -3. Setup events, and get insights on your dashboard - Analyze user interactions, explore the analytics and improve - your app. - - -## 1. Create an account and login to phospho - -Go to the [phospho platform](https://platform.phospho.ai/). Login or create an account if you don't have one. - -!!! info - If this is your first time using phospho, a Default project has been created for you. - - -## 2. Import your data in a project - -### Import your data - -In the header, to the right, look for the gears icon and click on it. - -You can then click on import data. - -![Click on settings](../images/guides/getting_started/settings.png) - -You now have different options, the easiest options are to import from a CSV/Excel file, to synchronise with LangSmith or LangFuse if you have existing data there. - -For other options, you can take a look at the [technical documentation](/docs/getting-started) to import data directly from your system. - -![Import data](../images/guides/getting_started/import_data.png) - -#### Import from file - -You can import a CSV or a Excel file into phospho. - -??? info "How to import a CSV to phospho" - - Head over to the [phospho](https://platform.phospho.ai) platform and click on the settings icon at the top right of the screen. Then select `Import data`. - - ![Import data](../images/import/import_data.png) - - Click on the **Upload dataset** button. 
- - ![Import from CSV](../images/import/start_sending_data.png) - - You can now drag and drop your file or click on the box to select it. - - You should have the following columns in your file: - - ```csv - input;output;task_id;session_id;created_at - "Hello! This is what the user asked to the system";"This is the response showed to the user by the app.";"task_1";"session_1";"2024-05-31 12:31:22" - ``` - - !!! info - - Make sure that your CSV is a valid CSV file separated by a colon or a semicolon. - - Make sure the created_at field is in the format `YYYY-MM-DD HH:MM:SS`. - - -#### Import from LangSmith - -You can import existing data from [LangSmith](https://smith.langchain.com) by providing the LangSmith API key and your LangSmith project name. - -We will periodically fetch your data from LangSmith and import it into phospho. - -??? info "How to connect LangSmith to phospho" - - In your [langsmith](https://cloud.langsmith.com/) account, head to the settings page in the bottom left. - - You will reach the API Keys page where you can create a new API key in the top right corner. - - ![langsmith api key](../images/import/api_key_langsmith.png) - - Create a new API key and copy it. - - You can now head to the [phospho](https://platform.phospho.ai) platform. - - Click the settings icon at the top right of the screen and select `Import data`. - - ![title](../images/import/import_data.png) - - Then click, the **Import from langsmith** button. - - You can now copy your API key in the input field and enter the name of your langsmith project to copy. - - !!! note - This data is encrypted and stored securely. We need it to periodically fetch your data from LangSmith and import it into phospho. - - ![title](../images/import/start_sending_data.png) - - Your data will be synced to your project in a minute. - - -#### Import from LangFuse - -You can import existing data from [LangFuse](https://cloud.langfuse.com/) by providing your LangFuse Public and Secret keys. - -We will periodically fetch your data from LangFuse and import it into phospho. - -??? info "How to connect LangFuse to phospho" - - Head to your [langfuse](https://cloud.langfuse.com/) account, and go to the settings page, in the bottom left. - - You will reach the API Keys page where you can create a new API key. - - ![langfuse api key](../images/import/langfuse_api_keys.png) - - Click on Create new API keys, you will need both the secret key and the public key. - - You can now head to [phospho](https://platform.phospho.ai). - - Click the settings icon at the top right of the screen and select `Import data`. - - ![Click on the settings icon](../images/import/import_data.png) - - Then click, the **Import from langfuse** button. - - You can now copy your Secret Key and your Public Key in the input fields. - - !!! note - This data is encrypted and stored securely. We need it to periodically fetch your data from LangFuse and import it into phospho. - - - ![Import from LangFuse](../images/import/start_sending_data.png) - - Your data will be synced to your project in a minute. - - -## 3. Define and run events in the past - -### Using events - -Events are the key to getting insights from your data. - -An event is a specific interaction between a user and the system you want to track. - -To define an event, go to the **Events** tab in the phospho platform and click on the **Add Event** button. 
- -![Add Event](../images/guides/getting_started/add_event.png) - -In this tab you can setup events in natural language, in this image, we have setup an event to detect when the system is unable to answer the user's question. - -#### Run events in the past - -Once you've defined your events, you can run them on past data. - -Click on the Detect events button in the **Events** tab to run an event on your data. - -![Detect events](../images/guides/getting_started/detect_events.png) - -## 4. That's it, you're all set ! - -You can now [understand your data](/docs/guides/understand-your-data), analyze it, and get insights on your dashboard. - -## Next steps - -Learn to use the phospho platform with our guides: - -
- -- :material-gavel:{ .lg .middle } __LLM as a judge__ - - --- - - Setup LLM as a judge in your application. **Detect events** in your data. - - [:octicons-arrow-right-24: Read more](./LLM-judge.md) - -- :material-target:{ .lg .middle } __User Intentions__ - - --- - - Detect user intentions and **get a global overview** of your LLM app. - - [:octicons-arrow-right-24: Read more](./user-intent.md) - -- :material-chart-pie:{ .lg .middle } __Understand your data__ - - --- - - Get insights on your data through visualization, clustering and more. **Quick and easy** - - [:octicons-arrow-right-24: Read more](./understand-your-data.md) - -
diff --git a/phospho-mkdocs/docs/guides/understand-your-data.md b/phospho-mkdocs/docs/guides/understand-your-data.md deleted file mode 100644 index 88c3ea3..0000000 --- a/phospho-mkdocs/docs/guides/understand-your-data.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -title: Understand your data -description: "Learn how to visualize, clusterize, and more." ---- - -## Get insights - -Now that you've imported your data and defined events, you can analyze your data in several ways. - -### Filtering - -Go to your [transcripts](https://platform.phospho.ai/org/transcripts/tasks) and filter your data to visualize interactions. - -![Filters](../images/guides/getting_started/filters.png) - -!!! info - You can combine filters to get more specific results. - -### Clustering - -Clustering is a powerful tool to group similar interactions together. - -We will automatically analyze your data to group similar interactions together. - -This gives you a better understanding of your data and what your users are talking about. - -![Clusters](../images/guides/getting_started/clusters.png) - -### Dataviz - -The **Dataviz** tab enables you to visualize your data in different ways. - -Plot any metric against any other metric and display them. - -!!! info - You can log and track any metric you want by adding them in the metadata field of your data. - -## More to come, let us know what you'd like to see! - -Contact us on our socials below or send us an email at [paul-louis@phospho.app](mailto:paul-louis@phospho.app). diff --git a/phospho-mkdocs/docs/guides/user-intent.md b/phospho-mkdocs/docs/guides/user-intent.md deleted file mode 100644 index 9adb744..0000000 --- a/phospho-mkdocs/docs/guides/user-intent.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -title: "Cluster User Intentions" -description: "Detect user intentions in your data" ---- - -The [phospho platform](https://phospho.ai/) allows you to explore user intentions through clustering techniques. - -This guide will show you how to achieve this. - - - -Make sure you have [imported your data](/docs/guides/getting-started) before starting this guide. - -## Walkthrough - -## 1. Go to the clustering tab - -Once on the platform, go to the **clustering** tab in the menu on the left of the screen. - -On here, phospho runs various algorithms to analyze your user interactions and detect patterns. - -We **group similar interactions** together to help you understand what your users are talking about. - -### 2. Run the clustering - -Click on **Run cluser detection** to start the process. - -![Clusters](../images/guides/user-intentions/clusters.png) - -!!! info - Clustering is not yet a continuous process, you will need to re-run it - manually to get the latest results. - -## How it works - -Phospho uses the phospho `intent-embed` model to represent user interactions in a high-dimensional space. Then, we use clustering techniques to group similar user messages together. -Finaly, we generate a summary of the clusters to help you understand what your users are talking about. - -## Next steps - -
- -- :material-gavel:{ .lg .middle } __LLM as a judge__ - - --- - - Leverage LLM as a judge techniques to analyze your LLM app's performance. **Quick and simple setup** - - [:octicons-arrow-right-24: Read more](#) - -- :material-chart-pie:{ .lg .middle } __Understand your data__ - - --- - - Get insights on your data through visualization, clustering and more. **Quick and easy** - - [:octicons-arrow-right-24: Read more](#) - -
diff --git a/phospho-mkdocs/docs/guides/welcome-guide.md b/phospho-mkdocs/docs/guides/welcome-guide.md deleted file mode 100644 index f312a15..0000000 --- a/phospho-mkdocs/docs/guides/welcome-guide.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -title: Welcome! -description: "Open Source Text Analytics Platform for LLM apps" ---- - -Welcome to the phospho platform guides. If you're unsure where to start, check out our [getting started guide](/docs/guides/getting-started). If you're looking for a deeper dive, you'll find everything you need below. - -Check out this video for a quick introduction to the platform. - - - -Monitor interactions between your LLM app and your users. Explore conversation topics and leverage real-time data. Get AI analytics and product-level insights to improve your LLM app. - -**Keywords:** _logging, automatic evaluations, experiments, A/B tests, user feedback, testing_ - -## Guides to get you started - -
- -- :material-rocket-launch:{ .lg .middle } __Get started__ - - --- - - Add text analytics in your LLM app in a blitz. **Quick and easy setup** - - [:octicons-arrow-right-24: Learn More](/docs/guides/getting-started) - -- :material-scale-balance:{ .lg .middle } __LLM as a judge__ - - --- - - Leverage LLM as a judge techniques to analyze your LLM app's performance. **Simple setup** - - [:octicons-arrow-right-24: Learn More](/docs/guides/LLM-judge) - -- :material-eye:{ .lg .middle } __Figure out User Intentions__ - - --- - - Figure out what your users are talking about. **See through the fog** - - [:octicons-arrow-right-24: Learn More](/docs/guides/user-intent) - -- :material-chart-box-outline:{ .lg .middle } __Understand your data__ - - --- - - Get insights on your data through visualization, clustering, and more. **Insights and analytics** - - [:octicons-arrow-right-24: Learn More](/docs/guides/understand-your-data) - -
- -Eager to see it in action? [Get started](/docs/guides/getting-started) in minutes. diff --git a/phospho-mkdocs/docs/import-data/api-integration.md b/phospho-mkdocs/docs/import-data/api-integration.md deleted file mode 100644 index 00f7e25..0000000 --- a/phospho-mkdocs/docs/import-data/api-integration.md +++ /dev/null @@ -1,275 +0,0 @@ ---- -title: "Setup logging in your app" -description: "Log text messages to phospho in real time" ---- - -You can setup the logging to phospho in your app in a few minutes. - -## 1. Get your phospho API key and your project id - -Go to the [phospho platform](https://platform.phospho.ai/). Login or create an account if you don't have one. - -If this is your first time using phospho, a Default project has been created for you. On the main page, note down the **project id** and follow the link to create a new **API key**. - -If you already have a project, go to Settings. Your project id is displayed on the top of the page. To create an API key, click on the _Manage Organization & API keys_ button. Store your **API key** safely! - -## 2. Setup phospho logging in your app - -### Add environment variables - -In your code, add the following environment variables: - -```bash -export PHOSPHO_API_KEY="your_api_key" -export PHOSPHO_PROJECT_ID="your_project_id" -``` - -### Log to phospho - -The basic abstraction of phospho is the **task**. If you're a programmer, you can think of tasks like a function. - -- `input (str)`: The text that goes into the system. _Eg: the user message._ -- `output (Optional[str])`: The text that comes out of the system. _Eg: the system response._ - -We prefer to use this abstraction because of its flexibility. You can log any text to a task, not just chat messages: _call to an LLM, answering a question, searching in documents, summarizing a text, performing inference of a model, steps of a chain-of-thought..._ - -Tasks can be grouped into **sessions**. Tasks and Sessions can be attached to **users**. - -### How to setup logging? - - -=== "Python" - - The phospho [Python module](https://pypi.org/project/phospho/) in the easiest way to log to phospho. It is compatible with Python 3.9+. - - ```bash - pip install --upgrade phospho - ``` - - To log tasks, use `phospho.log`. The logged tasks are analyzed by the phospho analytics pipeline. - - ```python - import phospho - - #ย By default, phospho reads the PHOSPHO_API_KEY and PHOSPHO_PROJECT_ID from the environment variables - phospho.init() - - # Example - input = "Hello! This is what the user asked to the system" - output = "This is the response showed to the user by the app." - - # This is how you log a task to phospho - phospho.log( - input=input, - output=output, - # Optional: for chats, group tasks together in sessions - #ย session_id = "session_1", - #ย Optional: attach tasks to users - #ย user_id = "user_1", - #ย Optional: add metadata to the task - #ย metadata = {"system_prompt": "You are a helpful assistant."}, - ) - ``` - -
- - - :material-language-python:{ .lg .middle } __More about logging in Python__ - - --- - - Did you know you could log OpenAI completions, streaming outputs and metadata? Learn more by clicking here. - - [:octicons-arrow-right-24: Read more](#) - -
- -=== "Javascript" - - The phospho [JavaScript module](https://www.npmjs.com/package/phospho) is the easiest way to log to phospho. It is compatible with Node.js. - - Types are available for your Typescript codebase. - - ```bash - npm i phospho - ``` - - To log tasks, use `phospho.log`. The logged tasks are analyzed by the phospho analytics pipeline. - - ```js - import { phospho } from "phospho"; - - //ย By default, phospho reads the PHOSPHO_API_ID and PHOSPHO_PROJECT_KEY from the environment variables - phospho.init(); - - // Example - const input = "Hello! This is what the user asked to the system"; - const output = "This is the response showed to the user by the app."; - - // This is how you log a task to phospho - phospho.log({ - input, - output, - // Optional: for chats, group tasks together in sessions - // session_id: "session_1", - // Optional: attach tasks to users - // user_id: "user_1", - // Optional: add metadata to the task - // metadata: { system_prompt: "You are a helpful assistant." }, - }); - ``` - -
- - - :material-language-javascript:{ .lg .middle } __More about logging in Javascript__ - - --- - - Did you know you could log OpenAI completions, streaming outputs and metadata? Learn more by clicking here. - - [:octicons-arrow-right-24: Read more](#) - -
- - -=== "API" - - You can directly log to phospho using [the /log endpoint](https://api.phospho.ai/v2/redoc#tag/Logs) of the API. - - ```bash - curl -X POST https://api.phospho.ai/v2/log/$PHOSPHO_PROJECT_ID \ - -H "Authorization: Bearer $PHOSPHO_API_KEY" \ - -H "Content-Type: application/json" \ - -d '{ - "batched_log_events": [ - { - "input": "your_input", - "output": "your_output", - "session_id": "session_1", - "user_id": "user_1", - "metadata": {"system_prompt": "You are a helpful assistant."}, - } - ] - }' - ``` - - !!! info - The `session_id`, `user_id` and `metadata` fields are **optional.** - -
- - - :material-webhook:{ .lg .middle } __API reference__ - - --- - - Create a tailored integration with the API. Learn more by clicking here. - - [:octicons-arrow-right-24: Read more](#) - -
- - -=== "Langchain" - - We provide a Langchain callback in our [Python module](https://pypi.org/project/phospho/). - - ```bash - pip install --upgrade phospho - ``` - - ```python - from phospho.integrations import PhosphoLangchainCallbackHandler - - chain = ... #ย Your Langchain agent or chain - - chain.invoke( - "Your chain input", - #ย Add the callback handler to the config - config={"callbacks": [PhosphoLangchainCallbackHandler()]}, - ) - ``` - -
- - - :material-bird:{ .lg .middle } __Langchain guide__ - - --- - - Customize what is logged to phospho by customizing the callback. Learn more by clicking here. - - [:octicons-arrow-right-24: Read more](#) - -
- - -=== "Supabase" - - Integrate phospho to your Supabase app is as simple as using the **phospho API**. - - !!! note - Follow the Supabase guide to leverage the power of product analytics in your - Supabase app! - - -
- - - :material-lightning-bolt:{ .lg .middle } __Read the supabase guide__ - - --- - - Get started with Supabase and phospho. Learn more by clicking here. - - [:octicons-arrow-right-24: Read more](#) - -
## 3. Get insights in the dashboard

phospho runs analytics pipelines on the logged messages. Discover the insights in the [phospho dashboard](https://phospho.ai/).

## Next steps
- -- :material-tag:{ .lg .middle } __Automatic tagging__ - - --- - - Automatically annotate your text data and be alerted. **Take action.** - - [:octicons-arrow-right-24: Learn more](#) - -- :material-account-group:{ .lg .middle } __Unsupervised clustering__ - - --- - - Group users' messages based on their intention. **Find out what your users are talking about.** - - [:octicons-arrow-right-24: Learn more](#) - -- :material-ab-testing:{ .lg .middle } __AB Testing__ - - --- - - Run experiments and iterate on your LLM app, while keeping track of performances. **Keep shipping.** - - [:octicons-arrow-right-24: Learn more](#) - -- :material-valve:{ .lg .middle } __Flexible evaluation pipeline__ - - --- - - Discover how to run and design a text analytics pipeline using natural language. **No code needed.** - - [:octicons-arrow-right-24: Learn more](#) - -- :material-account-search:{ .lg .middle } __User analytics__ - - --- - - Detect user languages, sentiment, and more. **Get to know power users.** - - [:octicons-arrow-right-24: Learn more](#) - -
diff --git a/phospho-mkdocs/docs/import-data/import-file.md b/phospho-mkdocs/docs/import-data/import-file.md deleted file mode 100644 index 39c5577..0000000 --- a/phospho-mkdocs/docs/import-data/import-file.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -title: Import a CSV or Excel file -description: You can upload your data to phospho by importing a CSV file or an Excel file ---- - -# Format your file - -Your CSV or Excel file need to have the following columns: - -- `input` : the input text data, ususally the user message -- `output` : the output text, ususally the LLM app response - -Additonally, you can add the following columns: - -- `task_id`: an id of the task (input/output couple) -- `session_id`: an id of the session. Messages with the same session_id will be grouped together in a single session -- `created_at`: the creation date of the task (format it like `"2021-09-01 12:00:00"`) - -The maximum upload size with this method is 500MB. - -# Upload your file to the plateform - -Click the setting icon at the top right of the screen and select `Import data`. - -![Import data](../images/import/import_data.png) - -Then click, the **Upload dataset** button and use **Choose file** button to select your file. - -![Choose file](../images/import/start_sending_data.png) - -Your tasks will be populated in your project in a minute. You might need to refresh the page to see them. - -# Next steps - -- [Run your first clustering](../analytics/clustering.md) -- [Run event detection](../analytics/events.md) \ No newline at end of file diff --git a/phospho-mkdocs/docs/import-data/import-langfuse.md b/phospho-mkdocs/docs/import-data/import-langfuse.md deleted file mode 100644 index f5694c0..0000000 --- a/phospho-mkdocs/docs/import-data/import-langfuse.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Import from Langfuse ๐Ÿชข -description: Sync your Langfuse data with phospho ---- - -# Go to Langfuse and head to settings - -Go to your [langfuse](https://cloud.langfuse.com/) account and head to the settings page, in the bottom left. - -You will reach the API Keys page where you can create a new API key. - -![langfuse api key](../images/import/langfuse_api_keys.png) - -Click on Create new API keys, you will need both the secret key and the public key. - -# Head to phospho and import your data - -Click the settings icon at the top right of the screen and select `Import data`. - -![Click the settings icon](../images/import/import_data.png) - -Then click, the **Import from langfuse** button. - -You can now copy your Secret Key and your Public Key in the input fields. - - - This data is encrypted and stored securely. We need it to periodically fetch - your data from LangFuse and import it into phospho. - - -![Import from langfuse](../images/import/start_sending_data.png) - -Your data will be synced to your project in a minute. - -# Next steps - -Default evaluators like language and sentiment will be run on messages. 
To create more events and to run them on your data, see the [event detection page](/docs/guides/events) diff --git a/phospho-mkdocs/docs/import-data/import-langsmith.md b/phospho-mkdocs/docs/import-data/import-langsmith.md deleted file mode 100644 index 4d7c256..0000000 --- a/phospho-mkdocs/docs/import-data/import-langsmith.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Import from Langsmith ๐Ÿฆœ๐Ÿ”— -description: Sync your Langsmith data with phospho ---- - -# Go to Langsmith and head to settings - -Go to your [langsmith](https://cloud.langsmith.com/) account and head to the settings page in the bottom left. - -You will reach the API Keys page where you can create a new API key in the top right corner. - -![langsmith api key](../images/import/api_key_langsmith.png) - -Create a new API key and copy it. - -# Head to phospho and import your data - -Click the settings icon at the top right of the screen and select `Import data`. - -![Click the settings icon](../images/import/import_data.png) - -Then click, the **Import from langsmith** button. - -You can now copy your API key in the input field and enter the name of your langsmith project to copy. - - - This data is encrypted and stored securely. We need it to periodically fetch - your data from LangSmith and import it into phospho. - - -![Import from langsmith](../images/import/start_sending_data.png) - -Data will be synced to your project in a minute. - -# Next steps - -Default evaluators like language and sentiment will be run on your data. To create more events and to run them on your data, see the [event detection page](/docs/guides/events) diff --git a/phospho-mkdocs/docs/import-data/tracing.md b/phospho-mkdocs/docs/import-data/tracing.md deleted file mode 100644 index 4f54a3b..0000000 --- a/phospho-mkdocs/docs/import-data/tracing.md +++ /dev/null @@ -1,174 +0,0 @@ ---- -title: Log intermediate steps -description: Log all the intermediate steps of your LLM app pipeline ---- - -To help you debug and deep dive into your LLM apps logs, you can set up tracing using the `phospho` library. - -This traces every intermediate steps of your LLM app pipeline, from the input text to the output text. - -## Setup - -### Install phospho - -!!! info - This feature is currently only available for Python. NodeJS version coming soon! - - -Make sure you have the `phospho` module installed: - -```bash -pip install -U phospho -``` - -### Install OpenTelemetry instrumentations - -phospho leverages [OpenTelemetry instrumentations](https://opentelemetry.io/ecosystem/registry/) to trace your LLM app pipeline. To trace a library, you need to install the corresponding instrumentation. - -For example, here is how to trace OpenAI and Mistral API calls: - -```bash -#ย This will trace OpenAI API calls -pip install opentelemetry-instrumentation-openai -#ย This will trace MistralAI API calls -pip install opentelemetry-instrumentation-mistralai -``` - -Refer to this [list of available instrumentations](https://github.com/traceloop/openllmetry/tree/main/packages) to find the one that fits your needs. - -### Initialize phospho - -Initialize phospho with `phospho.init()` and enable tracing with `tracing=True`: - -```python -import phospho - -phospho.init(tracing=True) -``` - -## Automatic tracing - -All calls to the installed instrumentations are traced. - -For example, when you do `phospho.log`, the OpenAI API calls will be linked to this log. 
- -```python -import phospho - -phospho.init(tracing=True) - -#ย This is your LLM app code -openai_client = OpenAI() -color = openai_client.chat.completions.create( - messages=[{"role": "user", "content": "Say a color"}], - model="gpt-4o-mini" -) -animal = openai_client.chat.completions.create( - messages=[{"role": "user", "content": "Say an animal"}], - model="gpt-4o-mini", -) - -#ย This is how you log to phospho -#ย All the API calls made by the OpenAI client will me linked to this log -phospho.log( - input="Give me a color and an animal", - output=f"Color: {color}, Animal: {animal}", -) -``` - -You can view intermediate steps in the [Phospho dashboard](https://app.phospho.ai/) when reading a message transcript. - -In the automatic tracing mode, the link between API calls and logs is done using the timestamps. If you want more control, you can use the context tracing or manual tracing. - -## Context tracing - -To have more control over which instrumentations calls are linked to which logs, define a context using the `phospho.tracer()` context block or `@phospho.trace()` decorator syntax. - -### Context block - -This links all calls to the instrumentations made inside the context block to the phospho log. For example, this will link the OpenAI API call to the log: - -```python -with phospho.tracer(): - messages = [{"role": "user", "content": "Say good bye"}] - openai_client.chat.completions.create( - messages=messages, - model="gpt-4o-mini", - max_tokens=1, - ) - phospho.log(input="Say good bye", output=response) -``` - -To add `session_id`, `task_id` and `metadata`, pass them as arguments to the context block: - -```python -with phospho.tracer( - task_id="some_id", - session_id="my_session_id", - metadata={"user_id": "bob"} -): - messages = [{"role": "user", "content": "Say good bye"}] - openai_client.chat.completions.create( - messages=messages, - model="gpt-4o-mini", - max_tokens=1, - ) - phospho.log(input="Say good bye", output=response) -``` - -### Decorator syntax - -This works the same way as the context block. - -```python -@phospho.trace() -def my_function(): - messages = [{"role": "user", "content": "Say good bye"}] - openai_client.chat.completions.create( - messages=messages, - model="gpt-4o-mini", - max_tokens=1, - ) - phospho.log(input="Say good bye", output=response) - -my_function() -``` - -!!! note - The context is `phospho.tracer`, while the decorator is `phospho.trace`, without the `r`. - - -To add `session_id`, `task_id` and `metadata`, pass them as arguments to the decorator: - -```python -@phospho.trace( - task_id="some_id", - session_id="my_session_id", - metadata={"user_id": "bob"} -) -def my_function(): - messages = [{"role": "user", "content": "Say good bye"}] - openai_client.chat.completions.create( - messages=messages, - model="gpt-4o-mini", - max_tokens=1, - ) - phospho.log(input="Say good bye", output=response) -``` - -## Manual tracing - -Pass intermediate steps as a `steps` parameter to `phospho.log` to trace your pipeline: - -```python -phospho.log( - input="Give me a color and an animal", - output=f"Color: {color}, Animal: {animal}", - steps=[ - {"name": "OpenAI API call", "input": "Say a color", "output": color}, - {"name": "OpenAI API call", "input": "Say an animal", "output": animal}, - ] -) -``` - -This is useful to trace custom modules, which don't have an Opentelemetry instrumentation available. For example, document retrieval, data augmentation, etc. 
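For instance, a custom document retrieval step can be logged this way. A minimal sketch, assuming placeholder `retrieve_documents` and `generate_answer` functions standing in for your own pipeline code:

```python
import phospho

phospho.init()

# Placeholders for your own retrieval and generation code
def retrieve_documents(query: str) -> list[str]:
    return ["Refunds are accepted within 30 days of purchase."]

def generate_answer(query: str, documents: list[str]) -> str:
    return f"Based on our policy: {documents[0]}"

query = "What does the refund policy say?"
documents = retrieve_documents(query)
answer = generate_answer(query, documents)

# Each manual step is a dict with a name, an input and an output
phospho.log(
    input=query,
    output=answer,
    steps=[
        {"name": "Document retrieval", "input": query, "output": str(documents)},
    ],
)
```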
diff --git a/phospho-mkdocs/docs/index.md b/phospho-mkdocs/docs/index.md deleted file mode 100755 index 543c993..0000000 --- a/phospho-mkdocs/docs/index.md +++ /dev/null @@ -1,108 +0,0 @@ -# Welcome to the phospho platform documentation! - -*The phospho platform* is the open source text analytics platform for LLM apps. Understand your users and turn text into insights. - -- Cluster text messages to understand user intents and use cases -- Tag, score, and classify new messages -- Set up evaluations to get quantified scores -- A/B test your LLM app - -**Keywords:** _clustering, automatic evaluations, A/B tests, user analytics_ - - - -
- -- :material-play-circle:{ .lg .middle } __Get started now__ - - --- - - Clusterize your text messages in 5 minutes. No code required. - - [:octicons-arrow-right-24: Getting started](/docs/getting-started) - -
- -## How does it work? - -1. **Import data** - Import messages to phospho (e.g., _what the user asked, what the assistant answered_). - -2. **Run analysis** - Cluster messages and run analysis on the messages. No code required. - -3. **Explore results** - Visualize results on the phospho dashboard and export analytics results with integrations. - -
- -- :material-play-circle:{ .lg .middle } __Get started now__ - - --- - - Clusterize your text messages in 5 minutes. No code required. - - [:octicons-arrow-right-24: Getting started](/docs/getting-started) - -
- -## Key features - -
- -- :material-message-text:{ .lg .middle } __Cluster messages__ - - --- - - Group users' messages based on their intention. **Find out what your users are talking about.** - - [:octicons-arrow-right-24: Clustering](/docs/analytics/clustering) - -- :material-database-import:{ .lg .middle } __Import data__ - - --- - - Log all the important data of your LLM app. **Get started in minutes.** - - [:octicons-arrow-right-24: Importing data](/docs/getting-started) - -- :material-tag-multiple:{ .lg .middle } __Automatic tagging__ - - --- - - Automatically annotate your text data and be alerted. **Take action.** - - [:octicons-arrow-right-24: Tagging](/docs/analytics/tagging) - -- :material-test-tube:{ .lg .middle } __AB Testing__ - - --- - - Run experiments and iterate on your LLM app, while keeping track of performances. **Keep shipping.** - - [:octicons-arrow-right-24: AB Testing](/docs/analytics/ab-testing) - -- :material-cog-sync:{ .lg .middle } __Flexible evaluation pipeline__ - - --- - - Discover how to run and design a text analytics pipeline using natural language. **No code needed.** - - [:octicons-arrow-right-24: Evaluation pipeline](/docs/analytics/events) - -- :material-account-details:{ .lg .middle } __User analytics__ - - --- - - Detect user languages, sentiment, and more. **Get to know power users.** - - [:octicons-arrow-right-24: User analytics](/docs/analytics/language) - -
- -Eager to see it in action? [:octicons-arrow-right-24: Get started](/docs/getting-started) in minutes. diff --git a/phospho-mkdocs/docs/integrations/argilla.md b/phospho-mkdocs/docs/integrations/argilla.md deleted file mode 100644 index 77e745d..0000000 --- a/phospho-mkdocs/docs/integrations/argilla.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Export your data to Argilla -description: Annotate data with Argilla ---- - -!!! Info - This feature is in preview. Contact us if you would like to try it out! - -Argilla is a data annotation tool that allows you to label your data with ease. - -You can export your data to an Argilla dataset by clicking on the "Export" button in the integration tab. - diff --git a/phospho-mkdocs/docs/integrations/javascript/logging.md b/phospho-mkdocs/docs/integrations/javascript/logging.md deleted file mode 100644 index 5eae828..0000000 --- a/phospho-mkdocs/docs/integrations/javascript/logging.md +++ /dev/null @@ -1,184 +0,0 @@ ---- -title: Log to phospho with Javascript -description: "Collect interactions and tasks" ---- - -## Log tasks to phospho - -**Tasks are the basic bricks that make up your LLM apps.** If you're a programmer, you can think of tasks like functions. - -A task is made of at least two things: - -- `input (string)`: What goes into a task. Eg: what the user asks to the assistant. -- `output (string?)`: What goes out of the task. Eg: what the assistant replied to the user. - -Example of tasks you can log to phospho: - -- Call to an LLM (input = query, output = llm response) -- Answering a question (input = question, output = answer) -- Searching in documents (input = search query, output = document) -- Summarizing a text (input = text, output = summary) -- Performing inference of a model (input = X, output = y) - -## Install the phospho module - -The phospho [JavaScript module](https://www.npmjs.com/package/phospho) is the easiest way to log to phospho. It is compatible with Node.js. - -Types are available for your Typescript codebase. - -```bash -npm i phospho -#ย with yarn -yarn add phospho -``` - -!!! info - The phospho module is an open source work in progress. [Your help is deeply - appreciated!](https://github.com/phospho-app/phosphojs) - -## Initialize phospho - -In your app, initialize the phospho module. By default, phospho will look for `PHOSPHO_API_KEY` and `PHOSPHO_PROJECT_ID` environment variables. - -!!! tip - Learn how to get your api key and project id by [clicking - here!](getting-started) - -```javascript -import { phospho } from "phospho"; - -phospho.init(); -``` - -You can also pass the `api_key` and `project_id` parameters to `phospho.init`. - -```javascript -// Initialize phospho -phospho.init({ apiKey: "api_key", projectId: "project_id" }); -``` - -## Log with phospho.log - -The most minimal way to log a task is to use `phospho.log`. - -### Logging text inputs and outputs - -```javascript -const question = "What's the capital of Fashion?"; - -const myAgent = (query) => { - // Here, you'd do complex stuff. - // But for this example we'll just return the same answer every time. - return "It's Paris of course."; -}; - -// Log events to phospho by passing strings directly -phospho.log({ - input: question, - output: myAgent(question), -}); -``` - -Note that the output is optional. If you don't pass an output, phospho will log `null`. - -### Logging OpenAI queries and responses - -phospho aims to be battery included. 
So if you pass something else than a `string` to `phospho.log`, phospho extracts what's usually considered "the input" or "the output". - -For example, if you use the OpenAI API: - -```javascript -// If you pass full OpenAI queries and results to phospho, it will extract the input and output for you. -const question = "What's the capital of Fashion?"; -const query = { - model: "gpt-3.5-turbo", - temperature: 0, - seed: 123, - messages: [ - { - role: "system", - content: - "You are an helpful frog who gives life advice to people. You say *ribbit* at the end of each sentence and make other frog noises in between. You answer shortly in less than 50 words.", - }, - { - role: "user", - content: question, - }, - ], - stream: false, -}; -const result = openai.chat.completions.create(query); -const loggedContent = await phospho.log({ input: query, output: result }); - -// Look at the fields "input" and "output" in the logged content -// Original fields are in "raw_input" and "raw_output" -console.log("The following content was logged to phospho:", loggedContent); -``` - -### Custom extractors - -Pass custom extractors to `phospho.log` to extract the input and output from any object. The original object will be converted to a dict (if jsonable) or a string and stored in `raw_input` and `raw_output`. - -```javascript -phospho.log({ - input: { custom_input: "this is a complex object" }, - output: { custom_output: "which is not a string nor a standard object" }, - // Custom extractors return a string - inputToStrFunction: (x) => x.custom_input, - outputToStrFunction: (x) => x.custom_output, -}); -``` - -## Logging additional metadata - -You can log additional data with each interaction (user id, version id,...) by passing arguments to `phospho.log`. - -```javascript -log = phospho.log({ - input: "log this", - output: "and that", - // There is a metadata field - metadata: { always: "moooore" }, - // Every extra keyword argument is logged as metadata - log_anything_and_everything: "even this is ok", -}); -``` - -## Streaming - -phospho supports streamed outputs. This is useful when you want to log the output of a streaming API. - -### Example with phospho.log - -Pass `stream: true` to `phospho.log` to handle streaming responses. When iterating over the response, phospho will automatically log each chunk until the iteration is completed. - -For example, you can pass streaming OpenAI responses to `phospho.log` the following way: - -```javascript -// This should also work with streaming -const question = "What's the capital of Fashion?"; -const query = { - model: "gpt-3.5-turbo", - temperature: 0, - seed: 123, - messages: [ - { - role: "system", - content: - "You are an helpful frog who gives life advice to people. You say *ribbit* at the end of each sentence and make other frog noises in between. 
You answer shortly in less than 50 words.", - }, - { - role: "user", - content: question, - }, - ], - stream: true, -}; -const streamedResult = await openai.chat.completions.create(query); - -phospho.log({ input: query, output: streamedResult, stream: true }); - -for await (const chunk of streamedResult) { - process.stdout.write(chunk.choices[0]?.delta?.content || ""); -} -``` diff --git a/phospho-mkdocs/docs/integrations/langchain.md b/phospho-mkdocs/docs/integrations/langchain.md deleted file mode 100644 index 66df46a..0000000 --- a/phospho-mkdocs/docs/integrations/langchain.md +++ /dev/null @@ -1,144 +0,0 @@ ---- -title: Log to phospho in Python Langchain -description: Add AI analytics to your Langchain agent with phospho ---- - -phospho can be added to a Langchain agent as a callback handler. By default, the task input is the beginning of the chain, and the task output is the end result. Intermediate steps are also logged. - -```python -from phospho.integrations import PhosphoLangchainCallbackHandler - -chain = ... #ย Your Langchain agent or chain - -chain.invoke( - "Your chain input", - #ย Add the callback handler to the config - config={"callbacks": [PhosphoLangchainCallbackHandler()]}, -) -``` - -## Detailed setup in a retrieval agent - -### 1. Setup - -Set the following environment variables: - -``` -export PHOSPHO_API_KEY=... -export PHOSPHO_PROJECT_ID=... -export OPENAI_API_KEY=... -``` - -!!! tip - Learn how to get your project id and api key by [clicking - here!](getting-started) - -Install requirements: - -``` -pip install phospho openai langchain faiss-cpu -``` - -### 2. Add callback - -The phospho module implements the Langchain callback as well as other helpful tools to interact with phospho. Learn more in the [python doc.](/docs/integrations/python) - -!!! info - The phospho module is an open source work in progress. [Your help is deeply - appreciated!](https://github.com/phospho-app/phospho) - -For example, let's create a file called `main.py` with the agent code. - -phospho is integrated with langchain via the `PhosphoLangchainCallbackHandler` callback handler. This callback handler will log the input and output of the agent to phospho. 
- -```python - -from langchain.prompts import ChatPromptTemplate -from langchain_community.chat_models import ChatOpenAI -from langchain_community.embeddings import OpenAIEmbeddings -from langchain_community.vectorstores import FAISS -from langchain_core.output_parsers import StrOutputParser -from langchain_core.runnables import RunnablePassthrough - -vectorstore = FAISS.from_texts( - [ - "phospho is the LLM analytics platform", - "Paris is the capital of Fashion (sorry not sorry London)", - "The Concorde had a maximum cruising speed of 2,179 km (1,354 miles) per hour, or Mach 2.04 (more than twice the speed of sound), allowing the aircraft to reduce the flight time between London and New York to about three hours.", - ], - embedding=OpenAIEmbeddings(), -) -retriever = vectorstore.as_retriever() -template = """Answer the question based only on the following context: -{context} - -Question: {question} -""" -prompt = ChatPromptTemplate.from_template(template) -model = ChatOpenAI() - -retrieval_chain = ( - {"context": retriever, "question": RunnablePassthrough()} - | prompt - | model - | StrOutputParser() -) - - -# To integrate with Phospho, add the following callback handler - -from phospho.integrations import PhosphoLangchainCallbackHandler - - -while True: - text = input("Enter a question: ") - response = retrieval_chain.invoke( - text, - config={ - "callbacks": [PhosphoLangchainCallbackHandler()] - } - ) - print(response) - -``` - -The integration with phospho is done by adding the `PhosphoLangchainCallbackHandler` to the config of the chain. You can learn more about callbacks in the [langchain doc](https://python.langchain.com/docs/modules/callbacks/). - -###ย 3. Test - -Start the RAG agent and ask questions about the documents. - -```bash -python main.py -``` - -The agent answers question based on retrieved documents (RAG, Retrieval Augmented Generation). - -```text -Enter a question: What's the top speed of the Concorde? -The Concorde top speed is 2,179km per hour. -``` - -The conversation and the intermediate retrievals steps (such as the documents retrieved) are logged to phospho. - - -## Custom logging in langchain - -For more advanced manual logging with a langchain, you can inherit from the `PhosphoLangchainCallbackHandler` and add custom behaviour. - -The callback has a reference to the `phospho` object, which can be used to log custom data. - -```python -from phospho.integrations import PhosphoLangchainCallbackHandler - -class MyCustomLangchainCallbackHandler(PhosphoLangchainCallbackHandler): - - def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any: - """Run on agent end.""" - - #ย Do something custom here - self.phospho.log(input="...", output="...") - -``` - -You can refer to the [langchain doc](https://python.langchain.com/docs/modules/callbacks/) to have the full list of callbacks available. diff --git a/phospho-mkdocs/docs/integrations/postgresql.md b/phospho-mkdocs/docs/integrations/postgresql.md deleted file mode 100644 index 934385c..0000000 --- a/phospho-mkdocs/docs/integrations/postgresql.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Export your data to PostgreSQL -description: Export your data to a PostgreSQL database ---- - - -!!! info - This feature is in preview. Contact us if you would like to try it out! - - -You can export your data to a PostgreSQL database by clicking on the "Export" button in the integration tab. - -Your data will be synced every 24 hours. 
\ No newline at end of file diff --git a/phospho-mkdocs/docs/integrations/powerbi.md b/phospho-mkdocs/docs/integrations/powerbi.md deleted file mode 100644 index 79c97be..0000000 --- a/phospho-mkdocs/docs/integrations/powerbi.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Export your data to PowerBI -description: Export your data to create a PowerBI report ---- - - -!!! info - This feature is in preview. Contact us if you would like to try it out! - -You can export your data to PowerBI by clicking on the "Export" button in the integration tab. - -This will populate a SQL database with your data. You can then connect PowerBI to this database and create a report. - -Your data will be synced every 24 hours. \ No newline at end of file diff --git a/phospho-mkdocs/docs/integrations/python/analytics.md b/phospho-mkdocs/docs/integrations/python/analytics.md deleted file mode 100644 index 025560f..0000000 --- a/phospho-mkdocs/docs/integrations/python/analytics.md +++ /dev/null @@ -1,132 +0,0 @@ ---- -title: Analyze your logs in Python -description: "Run custom analytics jobs in Python on your phospho logs" ---- - -Use the `phospho` Python package to run custom analytics jobs on your logs. - -## Setup - -Instal the package and set your API key and project ID as environment variables. - -```bash -pip install phospho pandas -export PHOSPHO_API_KEY=your_api_key -export PHOSPHO_PROJECT_ID=your_project_id -``` - -## Load logs as a DataFrame - -The best way to analyze your logs is to load them into a [pandas](https://pandas.pydata.org) DataFrame. This format is compatible with most analytics libraries. - -### One row = one (task, event) pair - -Phospho provides a `tasks_df` function to load the logs into a flattened DataFrame. Note that you need to have the `pandas` package installed to use this function. - -```python -import phospho - -phospho.init() -phospho.tasks_df(limit=1000) # Load the latest 1000 tasks -``` - -This will return a DataFrame where one row is one (task, event) pair. - -Example: - -| task_id | task_input | task_output | task_metadata | task_eval | task_eval_source | task_eval_at | task_created_at | session_id | session_length | event_name | event_created_at | -| -------------------------------- | ---------- | ----------- | -------------------------------------------------- | --------- | ---------------- | ------------------- | ------------------- | -------------------------------- | -------------- | --------------------------- | ------------------- | -| b58aacc6102f4a5e9d2364202ce23bf2 | Some input | Some output | \{'client_created_at': 1709925970, 'last_update... | success | owner | 2024-03-08 19:27:49 | 2024-03-09 15:09:31 | 71ee278ab2874666ae157c28a69c1679 | 2 | correction by user | 2024-03-08 19:27:43 | -| b58aacc6102f4a5e9d2364202ce23bf2 | Some input | Some output | \{'client_created_at': 1709925970, 'last_update... | success | owner | 2024-03-08 19:27:49 | 2024-03-09 15:09:31 | 71ee278ab2874666ae157c28a69c1679 | 2 | user frustration indication | 2024-03-08 19:27:43 | -| b58aacc6102f4a5e9d2364202ce23bf2 | Some input | Some output | \{'client_created_at': 1709925970, 'last_update... | success | owner | 2024-03-08 19:27:49 | 2024-03-09 15:09:31 | 71ee278ab2874666ae157c28a69c1679 | 2 | follow-up question | 2024-03-08 19:27:43 | - -This means that: - -- If a task has multiple events, there will be multiple rows with the same `task_id` and different `event_name`. -- If a task has no events, it will have one row with `event_name` as `None`. 
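As an illustration, here is a small pandas sketch built on this flattened format. It assumes the column names shown in the example table above (`task_id`, `task_eval`, `event_name`):

```python
import phospho

phospho.init()

df = phospho.tasks_df(limit=1000)

# How many distinct tasks triggered each event
event_counts = df.groupby("event_name")["task_id"].nunique().sort_values(ascending=False)
print(event_counts)

# Share of successful tasks per event (task_eval is "success" or "failure")
success_rate = (
    df.assign(is_success=df["task_eval"] == "success")
    .groupby("event_name")["is_success"]
    .mean()
)
print(success_rate)
```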
- -### One row = one task - -If you want one row to be one task, pass the parameter `with_events=False`. - -```python -phospho.tasks_df(limit=1000, with_events=False) -``` - -Result: - -| task_id | task_input | task_output | task_metadata | task_eval | task_eval_source | task_eval_at | task_created_at | session_id | session_length | -| -------------------------------- | ---------- | ----------- | ----------------------------- | --------- | ---------------- | ------------------- | ------------------- | -------------------------------- | -------------- | -| 21f3b21e8646402d930f1a02159e942f | Some input | Some output | \{'client_created_at':42f'... | failure | owner | 2024-03-08 19:53:59 | 2024-03-09 16:45:18 | a6b1b4224f874608b6037d41d582286a | 2 | -| 64382c6093b04a028a97a14131a4ab32 | Some input | Some output | \{'client_created_at':42f'... | success | owner | 2024-03-08 19:27:48 | 2024-03-09 15:51:07 | 9d13562051a84d6c806d4e6f6a58fb37 | 1 | -| b58aacc6102f4a5e9d2364202ce23bf2 | Some input | Some output | \{'client_created_at':42f'... | success | owner | 2024-03-08 19:27:49 | 2024-03-09 15:09:31 | 71ee278ab2874666ae157c28a69c1679 | 3 | - -### Ignore session features - -To ignore the sessions features, pass the parameter `with_sessions=False`. - -```python -phospho.tasks_df(limit=1000, with_sessions=False) -``` - -## Run custom analytics jobs - -To run custom analytics jobs, you can leverage all the power of the Python ecosystem. - -If you have a lot of complex ML models to run and LLM calls to make, consider the phospho lab that streamlines some of the work for you. - - -Set up the phospho lab to run custom analytics jobs on your logs - - -## Update logs from a DataFrame - -After running your analytics jobs, you might want to update the logs with the results. - -You can use the `push_tasks_df` function to push the updated data back to Phospho. This will override the specified fields in the logs. - -```python -# Fetch the 3 latest tasks -tasks_df = phospho.tasks_df(limit=3) -``` - -### Update columns - -Make changes to columns. **Not all columns are updatable.** This is to prevent accidental data loss. - -Here is the list of **updatable columns:** - -- `task_eval: Literal["success", "failure"]` -- `task_eval_source: str` -- `task_eval_at: datetime` -- `task_metadata: Dict[str, object]` (Note: this will override the whole metadata object, not just the specified keys) - - -If you need to update more fields, feel free to open an issue on the [GitHub repository](https://github.com/phospho-app/phospho/issues), submit a PR, or directly [reach out](mailto:contact@phospho.ai). - - -```python -#ย Make some changes -tasks_df["task_eval"] = "success" -tasks_df["task_metadata"] = tasks_df["task_metadata"].apply( - #ย To avoid overriding the whole metadata object, use **x to unpack the existing metadata - lambda x: {**x, "new_key": "new_value", "stuff": 44} -) -``` - -### Push updated data - -To push the updated data back to Phospho, use the `push_tasks_df` function. - -- You need to pass the `task_id` -- As a best practice, pass **only** the columns you want to update. - -```python -#ย Select only the columns you want to update -phospho.push_tasks_df(tasks_df[["task_id", "task_eval"]]) - -#ย To check that the data has been updated -phospho.tasks_df(limit=3) -``` - -You're all set. Your custom analytics are now also available in the Phospho UI. 
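Putting the pieces together, here is a minimal end-to-end sketch of the fetch, annotate, and push loop described above. The evaluation rule is a toy heuristic for illustration, not a recommendation:

```python
import phospho

phospho.init()

# 1. Fetch the latest tasks, one row per task
tasks_df = phospho.tasks_df(limit=100, with_events=False)

# 2. Run a custom analytics job: flag very short outputs as failures (toy heuristic)
tasks_df["task_eval"] = tasks_df["task_output"].apply(
    lambda output: "failure" if not output or len(output) < 10 else "success"
)

# 3. Push back only the task_id and the updated column
phospho.push_tasks_df(tasks_df[["task_id", "task_eval"]])
```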
diff --git a/phospho-mkdocs/docs/integrations/python/examples/openai-agent.md b/phospho-mkdocs/docs/integrations/python/examples/openai-agent.md deleted file mode 100644 index a280566..0000000 --- a/phospho-mkdocs/docs/integrations/python/examples/openai-agent.md +++ /dev/null @@ -1,75 +0,0 @@ ---- -title: OpenAI CLI agent ---- - -# OpenAI agent - -This is an example of a minimal OpenAI assistant in the console. Every interaction is logged to phospho. - -It demonstrates how to use `phospho.wrap()` with streaming content. - -## Installation - -``` -pip install --upgrade phospho openai -``` - -## Setup - -Create a `.env` file: -``` -PHOSPHO_PROJECT_ID=... -PHOSPHO_API_KEY=... -OPENAI_API_KEY=... -``` - -If you don't have a phospho API key and project ID, go to [Getting Started](/docs/getting-started) for the step by step instructions. - -## Implementation - -In `assistant.py`, add the following code: - -```python - -import phospho -import openai - -from dotenv import load_dotenv - -load_dotenv() - -phospho.init() -openai_client = openai.OpenAI() - -messages = [] - -print("Ask GPT anything (Ctrl+C to quit)", end="") - -while True: - prompt = input("\n>") - messages.append({"role": "user", "content": prompt}) - - query = { - "messages": messages, - "model": "gpt-3.5-turbo", - "stream": True, - } - response = openai_client.chat.completions.create(**query) - - phospho.log(input=query, output=response, stream=True) - - print("\nAssistant: ", end="") - for r in response: - text = r.choices[0].delta.content - if text is not None: - print(text, end="", flush=True) -``` - - -Launch the script and chat with the agent. - -``` -python assistant.py -``` - -Go to the phospho dashboard to monitor the interactions. \ No newline at end of file diff --git a/phospho-mkdocs/docs/integrations/python/examples/openai-streamlit.md b/phospho-mkdocs/docs/integrations/python/examples/openai-streamlit.md deleted file mode 100644 index a299ec2..0000000 --- a/phospho-mkdocs/docs/integrations/python/examples/openai-streamlit.md +++ /dev/null @@ -1,118 +0,0 @@ ---- -title: OpenAI Streamlit agent ---- - -# Streamlit webapp with an OpenAI chatbot - -This is a demo Streamlit webapp that showcases a simple assistant agent whose response are logged to phospho. - -This demo shows how you can use phospho to log a complex stream of tokens. - -## Installation - -``` -pip install --upgrade phospho streamlit openai -``` - -## Setup - -Create a secrets file `examples/.streamlit/secrets.toml` with your OpenAI API key - -``` -PHOSPHO_PROJECT_ID=... -PHOSPHO_API_KEY=... -OPENAI_API_KEY="sk-..." # your actual key -``` - -## Script - -```python -import streamlit as st -import phospho -from openai import OpenAI -from openai.types.chat import ChatCompletionChunk -from openai._streaming import Stream - - -st.title("Assistant") # Let's do an LLM-powered assistant ! - -# Initialize phospho to collect logs -phospho.init( - api_key=st.secrets["PHOSPHO_API_KEY"], - project_id=st.secrets["PHOSPHO_PROJECT_ID"], -) - -# We will use OpenAI -client = OpenAI(api_key=st.secrets["OPENAI_API_KEY"]) - -# The messages between user and assistant are kept in the session_state (the browser's cache) -if "messages" not in st.session_state: - st.session_state.messages = [] - -# Initialize a session. 
A session is used to group interactions of a single chat -if "session_id" not in st.session_state: - st.session_state.session_id = phospho.new_session() - -# Messages are displayed the following way -for message in st.session_state.messages: - with st.chat_message(name=message["role"]): - st.markdown(message["content"]) - -# This is the user's textbox for chatting with the assistant -if prompt := st.chat_input("What is up?"): - # When the user sends a message... - new_message = {"role": "user", "content": prompt} - st.session_state.messages.append(new_message) - with st.chat_message("user"): - st.markdown(prompt) - - # ... the assistant replies - with st.chat_message("assistant"): - message_placeholder = st.empty() - full_str_response = "" - # We build a query to OpenAI - full_prompt = { - "model": "gpt-3.5-turbo", - # messages contains the whole chat history - "messages": [ - {"role": m["role"], "content": m["content"]} - for m in st.session_state.messages - ], - # stream asks to return a Stream object - "stream": True, - } - # The OpenAI module gives us back a stream object - streaming_response: Stream[ - ChatCompletionChunk - ] = client.chat.completions.create(**full_prompt) - - # ----> this is how you log to phospho - logged_content = phospho.log( - input=full_prompt, - output=streaming_response, - # We use the session_id to group all the logs of a single chat - session_id=st.session_state.session_id, - # Adapt the logging to streaming content - stream=True, - ) - - # When you iterate on the stream, you get a token for every response - for response in streaming_response: - full_str_response += response.choices[0].delta.content or "" - message_placeholder.markdown(full_str_response + "โ–Œ") - - # If you don't want to log every streaming chunk, log only the final output. - # phospho.log(input=full_prompt, output=full_str_response, metadata={"stuff": "other"}) - message_placeholder.markdown(full_str_response) - - st.session_state.messages.append( - {"role": "assistant", "content": full_str_response} - ) -``` - -Launch the webapp: - -``` -streamlit run webapp.py -``` - diff --git a/phospho-mkdocs/docs/integrations/python/logging.md b/phospho-mkdocs/docs/integrations/python/logging.md deleted file mode 100644 index edacc37..0000000 --- a/phospho-mkdocs/docs/integrations/python/logging.md +++ /dev/null @@ -1,369 +0,0 @@ ---- -title: Log to phospho with Python -description: "Collect interactions and tasks" ---- - -## Log tasks to phospho - -phospho is a text analytics tool. To send text, you need to **log tasks**. - -### What's a task in phospho? - -**Tasks are the basic bricks that make up your LLM apps.** If you're a programmer, you can think of tasks like functions. - -A task is made of at least two things: - -- `input (str)`: What goes into a task. Eg: what the user asks to the assistant. -- `output (Optional[str])`: What goes out of the task. Eg: what the assistant replied to the user. - -The Task abstraction helps you structure your app and quickly explain what it does to an outsider: "Here's what goes in, here's what goes out." - -It's the basic unit of text analytics. You can analyze the input and output of a task to understand the user's intent, the system's performance, or the quality of the response. 
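To make the abstraction concrete, here is a tiny sketch that frames one app step as a task. The `summarize` function is a hypothetical stand-in for your own code:

```python
def summarize(text: str) -> str:
    # Hypothetical app step: in a real app, this would call your LLM pipeline
    return "A one-sentence summary."

task_input = "A long article about quantum computers..."
task_output = summarize(task_input)
# This (input, output) pair is one task: the unit you log and analyze in phospho
```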
- -### Examples of tasks - -- Call to an LLM (input = query, output = llm response) -- Answering a question (input = question, output = answer) -- Searching in documents (input = search query, output = document) -- Summarizing a text (input = text, output = summary) -- Performing inference of a model (input = X, output = y) - -## How to log a task? - -### Install phospho module - -The phospho [Python module](https://pypi.org/project/phospho/) in the easiest way to log to phospho. It is compatible with Python 3.9+. - -```bash -pip install --upgrade phospho -``` - - - The phospho module is open source. [Feel free to contribute!](https://github.com/phospho-app/phospho) - - -### Initialize phospho - -In your app, initialize the phospho module. By default, phospho will look for `PHOSPHO_API_KEY` and `PHOSPHO_PROJECT_ID` environment variables. - -!!! tip - Learn how to get your api key and project id by [clicking - here!](/docs/getting-started) - -```python -import phospho - -phospho.init() -``` - -You can also pass the `api_key` and `project_id` parameters to `phospho.init`. - -```python -phospho.init(api_key="phospho-key", project_id="phospho-project-id") -``` - -### Log with `phospho.log` - -To log messages to phospho, use `phospho.log`. This function logs a task to phospho. A task is a pair of input and output strings. The output is optional. - - -phospho is a text analytics tool. You can log any string input and output this way: - -```python -input_text = "Hello! This is what the user asked to the system" -output_text = "This is the response showed to the user by the app." - -# This is how you log a task to phospho -phospho.log(input=input_text, output=output_text) -``` - -The output is optional. - -The input and output logged to phospho are displayed in the dashboard and used to perform text analytics. - -## Common use cases - -### Log OpenAI queries and responses - -phospho aims to be battery included. So if you pass something else than a `str` to `phospho.log`, phospho extracts what's usually considered "the input" or "the output". - -For example, you can pass to `phospho.log` the same `input` as the arguments for `openai.chat.completions.create`. And you can pass to `phospho.log` the same `output` as OpenAI's `ChatCompletion` objects. - -```python -import openai -import phospho - -phospho.init() -openai_client = openai.OpenAI(api_key="openai-key") - -input_prompt = "Explain quantum computers in less than 20 words." - -# This is your LLM app code -query = { - "messages": [{"role": "system", "content": "You are a helpful assistant."}, - {"role": "user", "content": input_prompt}, - ], - "model": "gpt-4o-mini", -} -response = openai_client.chat.completions.create(**query) - -# You can directly pass as dict or a ChatCompletion as input and output -log = phospho.log(input=query, output=response) -print("input:", log["input"]) -print("output:", log["output"]) -``` - -```text -input: Explain quantum computers in less than 20 words. -output: Qubits harness quantum physics for faster, more powerful computation. -``` - -Note that the input is a dict. - -### Log a list of OpenAI messages - -In conversational apps, your conversation history is often a list of messages with a `role` and a `content`. This is because it's the format expected by OpenAI's chat API. - -You can directly log this messages list as an input or an output to `phospho.log`. The input, output, and system prompt are automatically extracted based on the messages' role. 
- -```python -#ย This is your conversation history in a chat app -messages = [ - {"role": "system", "content": "You are a helpful assistant."}, - {"role": "user", "content": "Explain quantum computers in less than 20 words."}, -] - -#ย Your LLM app code generates a response -response = openai_client.chat.completions.create( - messages=messages, - model="gpt-4o-mini", -) - -#ย You append the response to the conversation history -messages.append({"role": response.choices[0].role, "content": response.choices[0].message.content, } ) - -#ย You can log the conversation history as input or output -log = phospho.log(input=messages, output=messages) - -print("input:", log["input"]) -print("output:", log["output"]) -print("system_prompt:", log["system_prompt"]) # system prompt is automatically extracted -``` - -```text -input: Explain quantum computers in less than 20 words. -output: Qubits harness quantum physics for faster, more powerful computation. -system_prompt: You are a helpful assistant. -``` - -Note that consecutive messages with the same role are **concatenated** with a newline. - -```python -messages = [ - {"role": "system", "content": "You are a helpful assistant."}, - {"role": "user", "content": "Explain quantum computers in less than 20 words."}, - {"role": "user", "content": "What is the speed of light?"}, -] -log = phospho.log(input=messages) -``` - -```text -input: Explain quantum computers in less than 20 words.\nWhat is the speed of light? -``` - -If you need more control, consider using custom extractors. - -### Custom extractors - -Pass custom extractors to `phospho.log` to extract the input and output from any object. The custom extractor is a function that is applied to the input or output before logging. The function should return a string. - -The original object is converted to a dict (if jsonable) or a string, and stored in `raw_input` and `raw_output`. - -```python -phospho.log( - input={"custom_input": "this is a complex object"}, - output={"custom_output": "which is not a string nor a standard object"}, - #ย Custom extractors return a string - input_to_str_function=lambda x: x["custom_input"], - output_to_str_fucntion=lambda x: x["custom_output"], -) -``` - -```text -input: this is a complex object -output: which is not a string nor a standard object -``` - -## Log metadata - -You can log additional data with each interaction (user id, version id,...) by passing arguments to `phospho.log`. - -```python -log = phospho.log( - input="log this", - output="and that", - # There is a metadata field - metadata={"always": "moooore"}, - #ย Every extra keyword argument is logged as metadata - log_anything_and_everything="even this is ok", -) -``` - -## Log streaming outputs - -phospho supports streamed outputs. This is useful when you want to log the output of a streaming API. - -### Example: OpenAI streaming - -Out of the box, phospho supports streaming OpenAI completions. Pass `stream=True` to `phospho.log` to handle streaming responses. - -When iterating over the response, phospho will automatically concatenate each chunk until the streaming is finished. 
-
-```python
-
-from openai.types.chat import ChatCompletionChunk
-from openai._streaming import Stream
-
-query = {
-    "messages": [{"role": "system", "content": "You are a helpful assistant."},
-        {"role": "user", "content": "Explain quantum computers in less than 20 words."},
-    ],
-    "model": "gpt-4o-mini",
-    # Enable streaming on OpenAI
-    "stream": True
-}
-# The OpenAI completion function returns a Stream of chunks
-response: Stream[ChatCompletionChunk] = openai_client.chat.completions.create(**query)
-
-# Pass stream=True to phospho.log to handle this
-phospho.log(input=query, output=response, stream=True)
-```
-
-### Example: Local Ollama streaming
-
-Let's assume you're in a setup where you stream text from an API. The stream is a [generator](https://realpython.com/introduction-to-python-generators/) that yields chunks of the response. The generator is [immutable](https://realpython.com/python-mutable-vs-immutable-types/) by default.
-
-To use this as an `output` in `phospho.log`, you need to:
-
-1. Wrap the generator with `phospho.MutableGenerator` or `phospho.MutableAsyncGenerator` (for async generators)
-2. Specify a `stop` function that returns `True` when the streaming is finished. This is used to trigger the logging of the task.
-
-Here is an example with an [Ollama endpoint](https://ollama.com) that streams responses.
-
-```python
-import json
-
-import requests
-
-prompt = "Explain quantum computers in less than 20 words."
-
-r = requests.post(
-    # This is a local streaming Ollama endpoint
-    "http://localhost:11434/api/generate",
-    json={
-        "model": "mistral-7b",
-        "prompt": prompt,
-        "context": [],
-    },
-    # This connects to a streaming API endpoint
-    stream=True,
-)
-r.raise_for_status()
-response_iterator = r.iter_lines()
-
-# response_iterator is a generator that streams the response token by token
-# It is immutable by default
-# In order to directly log this to phospho, we need to wrap it this way
-response_iterator = phospho.MutableGenerator(
-    generator=response_iterator,
-    # Indicate when the streaming stops
-    stop=lambda line: json.loads(line).get("done", False),
-)
-
-# Log the generated content to phospho with stream=True
-phospho.log(input=prompt, output=response_iterator, stream=True)
-
-# As you iterate over the response, phospho combines the chunks
-# When stop(output) is True, the iteration is completed and the task is logged
-for line in response_iterator:
-    print(line)
-```
-
-## Wrap functions with `phospho.wrap`
-
-If you wrap a function with `phospho.wrap`, phospho automatically logs a task when it is called:
-
-- The passed arguments are logged as `input`
-- The returned value is logged as `output`
-
-You can still use [custom extractors](#custom-extractors) and log metadata.
-
-### Use the `@phospho.wrap` decorator
-
-If you want to log every call to a Python function, you can use the `@phospho.wrap` decorator. This is a nice pythonic way to structure your LLM app's code.
-
-```python
-@phospho.wrap
-def answer(messages: List[Dict[str, str]]) -> Optional[str]:
-    response = openai_client.chat.completions.create(
-        model="gpt-4o-mini",
-        messages=messages,
-    )
-    return response.choices[0].message.content
-```
-
-### How to log metadata with phospho.wrap?
-
-Like phospho.log, every extra keyword argument is logged as metadata.
-
-```python
-@phospho.wrap(metadata={"more": "details"})
-def answer(messages: List[Dict[str, str]]) -> Optional[str]:
-    response = openai_client.chat.completions.create(
-        model="gpt-4o-mini",
-        messages=messages,
-    )
-    return response.choices[0].message.content
-```
-
-### Wrap an imported function with phospho.wrap
-
-If you can't change the function definition, you can wrap it this way:
-
-```python
-# You can wrap any function call in phospho.wrap
-response = phospho.wrap(
-    openai_client.chat.completions.create,
-    # Pass additional metadata
-    metadata={"more": "details"},
-)(
-    messages=[
-        {"role": "system", "content": "You are a helpful assistant."},
-        {"role": "user", "content": "Explain quantum computers in less than 20 words."},
-    ],
-    model="gpt-4o-mini",
-)
-```
-
-If you want to wrap all calls to a function, override the function definition with the wrapped version:
-
-```python
-openai_client.chat.completions.create = phospho.wrap(
-    openai_client.chat.completions.create
-)
-```
-
-### Wrap a streaming function with phospho.wrap
-
-phospho.wrap can handle streaming functions. To do that, you need two things:
-
-1. Pass `stream=True`. This tells phospho to concatenate the string outputs.
-2. Pass a `stop` function, such that `stop(output) is True` when the streaming is finished. This triggers the logging of the task.
-
-```python
-@phospho.wrap(stream=True, stop=lambda token: token is None)
-def answer(messages: List[Dict[str, str]]) -> Generator[Optional[str], Any, None]:
-    streaming_response: Stream[
-        ChatCompletionChunk
-    ] = openai_client.chat.completions.create(
-        model="gpt-4o-mini",
-        messages=messages,
-        stream=True,
-    )
-    for response in streaming_response:
-        yield response.choices[0].delta.content
-```
\ No newline at end of file
diff --git a/phospho-mkdocs/docs/integrations/python/reference.md b/phospho-mkdocs/docs/integrations/python/reference.md
deleted file mode 100644
index ccfe2d8..0000000
--- a/phospho-mkdocs/docs/integrations/python/reference.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Python module reference
-description: "Full documentation for the phospho Python module"
----
-
- -- :material-book-open:{ .lg .middle } __Full Python module reference__ - - --- - - Click here to get the doc for every function of the Python module. - - [Read the docs](https://phospho-app.github.io/phospho/ref/phospho/phospho.html) - -- :material-github:{ .lg .middle } __Source code__ - - --- - - Your contributions are welcome! - - [View on GitHub](https://github.com/phospho-app/phospho) - -- :material-language-python:{ .lg .middle } __Python module on PyPI__ - - --- - - pip install phospho - - [View on PyPI](https://pypi.org/project/phospho/) - -
diff --git a/phospho-mkdocs/docs/integrations/python/testing.md b/phospho-mkdocs/docs/integrations/python/testing.md deleted file mode 100644 index d90fd6a..0000000 --- a/phospho-mkdocs/docs/integrations/python/testing.md +++ /dev/null @@ -1,171 +0,0 @@ ---- -title: Testing with Python -description: "Test your agent before deploying it to production" ---- - -Evaluate your app's performance before deploying it to production. - -The phospho testing framework allows you to test your app with historical data, custom datasets, and custom tests. - -The phospho python module **parallelizes** the function calls to **speed up** the testing process. - -## Getting started - -To get started, install the phospho python module. - -```bash -pip install -U phospho -``` - -Create a new file `phospho_testing.py`: - -```python -import phospho - -phospho_test = phospho.PhosphoTest() -``` - -In this file, you can then write your tests. - - -## Backtesting - -To use data from the phospho platform, you can use the backtest source loader. - -```python -import phospho - -phospho_test = phospho.PhosphoTest() - -@phospho_test.test( - source_loader="backtest", # Load data from logged phospho data - source_loader_params={"sample_size": 3}, -) -def test_backtest(message: phospho.lab.Message) -> str | None: - client = phospho.lab.get_sync_client("mistral") - response = client.chat.completions.create( - model="mistral-small", - messages=[ - {"role": "system", "content": "You are an helpful assistant"}, - {"role": message.role, "content": message.content}, - ], - ) - return response.choices[0].message.content -``` - - -## Dataset .CSV, .XLSX, .JSON - -To test with a custom dataset, you can use the dataset source loader. - - -```python -import phospho - -phospho_test = phospho.PhosphoTest() - -@phospho_test.test( - source_loader="dataset", - source_loader_params={"path": "path/to/dataset.csv"}, -) -def test_backtest(column_a: str, column_b: str) -> str | None: - client = phospho.lab.get_sync_client("mistral") - response = client.chat.completions.create( - model="mistral-small", - messages=[ - {"role": "system", "content": "You are an helpful assistant"}, - {"role": "user", "content": column_a}, - ], - ) - return response.choices[0].message.content -``` - - -Supported file formats: `csv`, `xlsx`, `json` - -!!! info - The columns of the dataset file should match the function arguments. - -Example of a local csv file: - -```txt -column_a, column_b -"What's larger, 3.9 or 3.11?", "3.11" -``` - -## Custom tests - -To write custom tests, you can just create a function and decorate it with `@phospho_test.test()`. - -At the end, add `phospho.log` to send the data to phospho for analysis. - -```python -import phospho - -phospho_test = phospho.PhosphoTest() - -@phospho_test.test() -def test_simple(): - client = phospho.lab.get_sync_client("mistral") - response = client.chat.completions.create( - model="mistral-small", - messages=[ - {"role": "system", "content": "You are an helpful assistant"}, - {"role": "user", "content": "What's bigger: 3.11 or 3.9?"}, - ], - ) - response_text = response.choices[0].message.content - # Use phospho.log to send the data to phospho for analysis - phospho.log( - input="What's bigger: 3.11 or 3.9?", - output=response_text, - #ย Specify the version_id of the test - version_id=phospho_test.version_id, - ) -``` - -## Run using python - -To run the tests, use the `run` method of the `PhosphoTest` class. 
- -```python -phospho_test.run() -``` - -The `executor_type` can be either: -- `parallel` (default): parallelizes the backtest and dataset source loader calls. -- `parallel_jobs`: all functions are called in parallel. -- `sequential`: great for debugging. - -## Run using the phospho CLI - -You can also use the phospho command line interface to run the tests. In the folder where `phospho_testing.py` is located, run: - -```bash -phospho init # Run this only once -phospho test -``` - -The executor type can be specified with the `--executor-type` flag. - -```bash -phospho test --executor-type=parallel_jobs -``` - -Learn more using the `--help` flag: - -```bash -phospho test --help -``` - -
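If you prefer to pick the executor from Python rather than the CLI, here is a small sketch. It assumes `run()` accepts the `executor_type` keyword with the values listed above:

```python
# Sequential execution is the easiest to debug; "parallel" is the default
phospho_test.run(executor_type="sequential")
```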
- -- :octicons-terminal-16:{ .lg .middle } __phospho CLI__ - - --- - - Learn how to install phospho command line interface - - [:octicons-arrow-right-24: Read more](#) - -
diff --git a/phospho-mkdocs/docs/integrations/supabase.md b/phospho-mkdocs/docs/integrations/supabase.md deleted file mode 100644 index 1754886..0000000 --- a/phospho-mkdocs/docs/integrations/supabase.md +++ /dev/null @@ -1,292 +0,0 @@ ---- -title: Log to phospho in a Supabase app with a webhook -description: Add AI analytics to your Supabase chatbot with phospho ---- - -phospho is a platform that helps you build better chatbots by providing AI analytics about the user experience of your chatbot. - -[Supabase](https://supabase.com/) is an open-source database, authentication system, and hosting platform that allows you to quickly and easily build powerful web-based applications. - -If you're using Supabase to build a chatbot, here's how you can log your chatbot messages to phospho using a Supabase Database webhook, a Supabase Edge Function, and the phospho API. - -## Prerequisites - -We assume in this guide that you have already set up a Supabase project. - -```bash -npm i supabase -supabase init -supabase login -``` - -We also assume that you have already created the chatbot UI using Supabase ([here's a template](https://github.com/mayooear/langchain-supabase-website-chatbot)). - -## Add the phospho API key and project id to your Supabase project - -[Create an account on phospho](https://app.phospho.ai/dashboard) and get your API key and project id from the Settings. - -Then, add the `PHOSPHO_API_KEY` and `PHOSPHO_PROJECT_ID` secrets to your Supabase project. - -### Option 1: In the CLI - -Add the phospho API key and project id to your `./supabase/.env` file: - -```bash .env -PHOSPHO_API_KEY="..." -PHOSPHO_PROJECT_ID="..." -``` - -Push those secrets to your Supabase project: - -```bash -supabase secrets set --env-file ./supabase/.env -``` - -### Option 2: In the console UI - -Add directly the phospho API key and project id as Edge Functions Secrets in the Supabase console. Go to Settings/Edge Functions, and create the `PHOSPHO_API_KEY` and `PHOSPHO_PROJECT_ID` secrets. - -![Edge functions secrets](../images/supabase/secrets_edge_functions.png) - -## Setup your chat_history table - -If you're using Supabase to build a chatbot, you probably already have a table that stores the chat history of your users. This table lets your users access their chat history on your app event after they close the website. - -If you don't, **you need to create a `chat_history` table.** - -Here's what your `chat_history` table should look like: - -| message_id | chat_id | user_message | assistant_response | metadata | -| ---------- | ------- | ------------ | -------------------------- | --------------------------- | -| c8902bda28 | 9bc8eda | Hi | Hello! How can I help you? | \{"model_name": "gpt-3.5"\} | - -Here are the columns of the table: - -- `message_id` (UUID), the unique id of the message. -- `chat_id` (UUID), the unique id of the chat. All the messages from the same conversation should have the same `chat_id`. -- `user_message` (TEXT), the message sent by the user. -- `assistant_response` (TEXT), is the response displayed to the user. It can be the direct generation of an LLM, or the result of a multistep generation. -- (Optional)` metadata` (JSON), a dictionary containing metadata about the message - -### Create the table - -In Supabase, create a new table called `chat_history` with the columns described above. Customize the table to match your app behaviour. 
- -Here's for example the SQL code to create the table with the columns described above: - -```sql -create table - public.chat_history ( - message_id uuid not null default gen_random_uuid (), - chat_id uuid not null default gen_random_uuid (), - user_message text not null, - assistant_response text null, - metadata json null, - constraint chat_history_pkey primary key (message_id) - ) tablespace pg_default; -``` - -### Update the table - -The table `chat_history` should be updated every time a new message is sent to your chatbot. - -Example of how to insert a new row in the chat_history table with Supabase: - -```javascript -// The first time a user sends a message, let the chat_id be generated automatically -const { firstMessage, error } = await supabase - .from('chat_history') - .insert({ - user_message: userMessage, // The message sent by the user - assistant_response: assistantResponse, // The response displayed to the user, eg LLM generation - metadata: metadata // Optional Object -}).select() - -// We get the chat_id of the first message -const chat_id = firstMessage.chat_id - -// The next time the user sends a message, we use the same chat_id -// This groups all the messages from the same conversation -const { error } = await supabase - .from('chat_history') - .insert({ - chat_id: chat_id, - user_message: userMessage, - assistant_response: assistantResponse, - metadata: metadata -}).select() -``` - -## Setup the Supabase Edge Function - -Let's create a [Supabase Edge Function](https://supabase.com/docs/guides/functions/quickstart) that will log the chat message to phospho using the [phospho API](/docs/api-reference). Later, we will trigger this function with a Supabase Database webhook. - -### Create the Edge Function - -Create a new Edge Function called phospho-logging inside your project: - -```bash -supabase functions new phospho-logging -``` - -This creates a function stub in your `supabase` folder: - -```bash -โ””โ”€โ”€ supabase - โ”œโ”€โ”€ functions - โ”‚ โ””โ”€โ”€ phospho-logging - โ”‚ โ”‚ โ””โ”€โ”€ index.ts ## Your function code - โ””โ”€โ”€ config.toml -``` - -### Write the code to call the phospho API - -In the newly created `index.ts` file, we add a basic code that: - -1. Gets the phospho API key and project id from the environment variables. -2. Converts the payload sent by Supabase to the format expected by the phospho API. -3. Sends the payload to the phospho API. 
- -Here's an example of what the code could look like: - -```javascript supabase/functions/phospho-logging/index.ts -// Get the phospho API key and project id from the environment variable -const phosphoApiKey = Deno.env.get("PHOSPHO_API_KEY"); -const phosphoProjectId = Deno.env.get("PHOSPHO_PROJECT_ID"); -const phosphoUrl = `https://api.phospho.ai/v2/log/${phosphoProjectId}`; - -// This interface describes the payload sent by Supabase to the Edge Function -// Change this to match your chat_history table -interface ChatHistoryPayload { - type: "INSERT" | "UPDATE" | "DELETE"; - table: string; - record: { - message_id: string; - chat_id: string; - user_message: string; - assistant_response: string; - metadata: { - model_name: string; - }; - }; -} - -Deno.serve( - async (req: { - json: () => ChatHistoryPayload | PromiseLike; - }) => { - if (!phosphoApiKey) { - throw new Error("Missing phospho API key"); - } - if (!phosphoProjectId) { - throw new Error("Missing phospho project id"); - } - - const payload: ChatHistoryPayload = await req.json(); - - // Here, we react to the INSERT and UPDATE events on the chat_history table - // Change this to match your chat_history table - if (payload.record.user_message && (payload.type === "UPDATE" || payload.type === "INSERT")) { - // Here, we convert the payload to the format expected by the phospho API - // Change this to match your chat_history table - const phosphoPayload = { - batched_log_events: [ - { - // Here's how to map the payload to the phospho API - task_id: payload.record.message_id, - session_id: payload.record.chat_id, - input: payload.record.user_message, - output: payload.record.assistant_response, - }, - ], - }; - - // Send the payload to the phospho API - const response = await fetch(phosphoUrl, { - method: "POST", - headers: { - Authorization: `Bearer ${phosphoApiKey}`, - "Content-Type": "application/json", - }, - body: JSON.stringify(phosphoPayload), - }); - - if (!response.ok) { - throw new Error( - `Error sending chat data to Phospho: ${response.statusText}` - ); - } - - return new Response(null, { status: 200 }); - } - - return new Response("No new chat message detected", { status: 200 }); - } -); -``` - -Feel free to change the code to adapt it to your `chat_history` table and to how you chat messages are stored. - -### Deploy the Edge Function - -Deploy the function to your Supabase project: - -```bash -supabase functions deploy phospho-logging --project-ref your_supabase_project_ref -``` - -Your Supabase project ref which can be found in your console url: `https://supabase.com/dashboard/project/project-ref` - -## Setup the Supabase Webhook - -Now that you have created the Supabase Edge Function, create a Supabase Database webhook to trigger it. - -### Create the webhook - -In the Supabase console, go to Database/Webhook. - -![Webhooks](../images/supabase/webhook_tab.png) - -Click on Create new in the top right. Make the webhook trigger on the `chat_history` table, and on the `INSERT` and `UPDATE` events. - -![Webhooks again](../images/supabase/create_webhook_1.png) - -### Call the Edge Function with authentication - -In the webhook configuration, select the type of webhook "Supabase Edge Function" and select the `phospho-logging` you just deployed. - -In the HTTP Headers section, add an `Authorization` header with the value `Bearer ${SUPABSE_PROJECT_ANON_PUBLIC_KEY}`. Find your anon public key in the console, in the tabs Settings/API/Project API keys. 
-
-![](../images/supabase/create_webhook_2.png)
-
-### Test the webhook
-
-To test the webhook, insert a row in the `chat_history` table, and the webhook should be triggered. You'll see the logs in the phospho dashboard.
-
-You can also send a message to your chatbot. This will now trigger the webhook and log the message to phospho.
-
-## Next steps
-
-You're done! You are now logging the chatbot messages to phospho and can learn how the users interact with your chatbot using the phospho dashboard and AI analytics.
-
-Learn more about phospho features by reading the [guides](/docs/guides):
-
- -- :material-comment-text:{ .lg .middle } __Log user feedback__ - - --- - - Log user feedback to phospho to improve the phospho evaluation - - [:octicons-arrow-right-24: Read more](#) - -- :material-tune:{ .lg .middle } __Run AB Tests__ - - --- - - Try different versions of your chatbot and compare outcomes on phospho - - [:octicons-arrow-right-24: Read more](#) - -
diff --git a/phospho-mkdocs/docs/local/custom-job.md b/phospho-mkdocs/docs/local/custom-job.md deleted file mode 100644 index 3ba8a18..0000000 --- a/phospho-mkdocs/docs/local/custom-job.md +++ /dev/null @@ -1,98 +0,0 @@ ---- -title: Create Custom Jobs -description: Create custom jobs and run them on your messages with phospho ---- - -phospho comes with several built-in jobs that you can use to process your messages: zero-shot evaluation, classification based evaluation, event detection... - -But you can also create your own jobs and run them on your messages. This is what we call a custom job. - -## Creating a custom job function - -To create a custom job function, you need to create a function that: - -- takes a `lab.Message` as input -- can take additional parameters if needed (they will be passed as `JobConfig`) -- returns a `lab.JobResult`. - The `lab.JobResult` should contain the result of the job function and the type of the result. - -For instance, to define a simple job that checks if a message contains a forbidden word, you can create a Job function like this: - -```python -from phospho import lab -from typing import List -import re - -def my_custom_job(message: lab.Message, forbidden_words: List) -> lab.JobResult: - """ - For each each message, me will check if the forbidden words are present in the message. - The function will return a JobResult with a boolean value - (True if one of the words is present, False otherwise). - """ - - pattern = r'\b(' + '|'.join(re.escape(word) for word in forbidden_words) + r')\b' - - # Use re.search() to check if any of the words are in the text - if re.search(pattern, message.content): - result = True - else: - result = False - - return lab.JobResult( - job_id="my_custom_job", - result_type=lab.ResultType.bool, - value=result, - ) -``` - -## Running a custom job - -Once you have defined your custom job function, you can create a Job in your workload that will run this job function on your messages. - -You need to pass the function in the `job_function` of the `lab.Job` object. - -In our example: - -```python -# Create a workload in our lab -workload = lab.Workload() - -# Add our job to the workload -workload.add_job( - lab.Job( - id="regex_check", - job_function=my_custom_job, # We add our custom job function here - config=lab.JobConfig( - forbidden_words=["cat", "dog"] - ), - ) -) -``` - -This workload can then be run on your messages using the `async_run` method. - -```python -await workload.async_run( - messages=[ - # No forbiden word is present. - lab.Message( - id="message_1", - content="I like elephants.", - ), - # One forbiden word is present. - lab.Message( - id="message_2", - content="I love my cat.", - ) - ] -) - -# Let's see the results -for i in range(1, 3): - print( - f"In message {i}, a forbidden word was detected: {workload.results['message_'+str(i)]['regex_check'].value}" - ) - -# In message 1, a forbidden word was detected: False -# In message 2, a forbidden word was detected: True -``` diff --git a/phospho-mkdocs/docs/local/llm-provider.md b/phospho-mkdocs/docs/local/llm-provider.md deleted file mode 100644 index 1b3d718..0000000 --- a/phospho-mkdocs/docs/local/llm-provider.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Using a custom LLM provider -description: Have phospho work in preview without using OpenAI ---- - -phospho preview can be ran using any OpenAI compatible LLM provider. 
The most common ones include:
-
-- Mistral AI (https://mistral.ai/)
-- Ollama (https://ollama.com/)
-- vLLM (https://docs.vllm.ai/)
-- and many others
diff --git a/phospho-mkdocs/docs/local/optimize.md b/phospho-mkdocs/docs/local/optimize.md
deleted file mode 100644
index 287149b..0000000
--- a/phospho-mkdocs/docs/local/optimize.md
+++ /dev/null
@@ -1,193 +0,0 @@
----
-title: Optimize Jobs
-description: You can use the built-in optimizer to find the optimal model and hyperparameters for your jobs.
----
-
-In this guide, we will use the `lab` from the `phospho` package to run an event extraction task on a dataset.
-First, we will run on a subset of the dataset with several models:
-
-- the OpenAI API
-- the Mistral AI API
-- a local Ollama model
-
-Then, we will use the `lab` optimizer to find the best model and hyperparameters for the task in terms of performance, speed and price.
-
-Finally, we will use the `lab` to run the best model on the full dataset and compare the results with the subset.
-
-Feel free to only use the APIs or Ollama models you want.
-
-## Installation and setup
-
-You will need:
-
-- an OpenAI API key (find yours [here](https://platform.openai.com/api-keys))
-- a Mistral AI API key (find yours [here](https://console.mistral.ai/api-keys/))
-- Ollama running on your local machine, with the Mistral 7B model installed. You can find the installation instructions for Ollama [here](https://ollama.com)
-
-```
-pip install --upgrade phospho
-```
-
-### (Optional) Install Ollama
-
-If you want to use Ollama, install the [Ollama app](https://ollama.com) on your desktop, launch it, and install the Python package to interact with it:
-
-```
-pip install ollama
-```
-
-Test your installation by running the following script:
-
-```python
-import ollama
-
-try:
-    # Let's check we can reach your local Ollama API
-    response = ollama.chat(model='mistral', messages=[
-        {
-            'role': 'user',
-            'content': 'What is the best French cheese? Keep your answer short.',
-        },
-    ])
-    print(response['message']['content'])
-except Exception as e:
-    print(f"Error: {e}")
-    print("You need to have a local Ollama server running to continue and the mistral model downloaded. \nRemove references to Ollama otherwise.")
-```
-
-## Define the phospho workload and jobs
-
-```python
-from phospho import lab
-from typing import Literal
-
-# Create a workload in our lab
-workload = lab.Workload()
-
-# Setup the configs for our job
-# Models are ordered from the least desired to the most desired
-class EventConfig(lab.JobConfig):
-    event_name: str
-    event_description: str
-    model_id: Literal["openai:gpt-4", "mistral:mistral-large-latest", "mistral:mistral-small-latest", "ollama:mistral-7B"] = "openai:gpt-4"

-# Add our job to the workload
-workload.add_job(
-    lab.Job(
-        name="sync_event_detection",
-        id="question_answering",
-        config=EventConfig(
-            event_name="Question Answering",
-            event_description="User asks a question to the assistant",
-            model_id="openai:gpt-4"
-        )
-    )
-)
-```
-
-## Loading a message dataset
-
-Let's load a dataset of messages from huggingface, so we can run our extraction job on it.
-
-```bash
-pip install datasets
-```
-
-```python
-from datasets import load_dataset
-
-dataset = load_dataset("daily_dialog")
-
-# Generate a sub dataset with 30 messages
-sub_dataset = dataset["train"].select(range(30))
-
-# Let's print one of the messages
-print(sub_dataset[0]["dialog"][0])
-
-# Build the message list for our lab
-messages = []
-for row in sub_dataset:
-    text = row["dialog"][0]
-    messages.append(lab.Message(content=text))
-
-# Run the lab on it
-# The job will be run with the default model (openai:gpt-4)
-workload_results = await workload.async_run(messages=messages, executor_type="parallel")
-
-# Compute alternative results with the Mistral API and Ollama
-await workload.async_run_on_alternative_configurations(messages=messages, executor_type="parallel")
-```
-
-### Apply the optimizer to the pipeline
-
-For the purpose of this demo, we consider a configuration good enough if it matches gpt-4 on at least 80% of the dataset. Good old Pareto.
-
-You can check the current configuration of the workload with:
-
-```python
-workload.jobs[0].config.model_id
-```
-
-To run the optimizer, just run the following:
-
-```python
-workload.optimize_jobs(accuracy_threshold=0.8)
-
-# Let's check the new model_id (if it has changed)
-workload.jobs[0].config.model_id
-```
-
-For us, `mistral:mistral-small-latest` was selected.
-
-## Run our workload on the full dataset, with optimized parameters
-
-We can now run the workload on the full dataset, with the optimized model.
-
-```python
-sub_dataset = dataset["train"]  # Here you can limit the dataset to a subset if you want to test faster and cheaper
-
-# Build the message list for our lab
-messages = []
-for row in sub_dataset:
-    text = row["dialog"][0]
-    messages.append(lab.Message(content=text))
-
-# The job will be run with the best model (mistral:mistral-small-latest in our case)
-workload_results = await workload.async_run(messages=messages, executor_type="parallel")
-```
-
-## Analyze the results
-
-```python
-boolean_result = []
-
-# Go through the dict
-for key, value in workload_results.items():
-    result = value['question_answering'].value
-    boolean_result.append(result)
-
-# Let's count the number of True and False
-true_count = boolean_result.count(True)
-false_count = boolean_result.count(False)
-
-print(f"In the dataset, {true_count/len(boolean_result)*100}% of the messages are a question. The rest are not.")
-```
-
-In our case:
-
-```
-In the dataset, 44.5% of the messages are a question. The rest are not.
-```
-
-## Going further
-
-You can use the `lab` to run other tasks, such as:
-
-- Named Entity Recognition
-- Sentiment Analysis
-- Evaluations
-- And more!
-
-You can also play around with different models, different hyperparameters, and different datasets.
-
-Want to have such analysis on your own LLM app, in real time? Check out the cloud-hosted version of phospho, available on [phospho.ai](https://phospho.ai)
diff --git a/phospho-mkdocs/docs/local/quickstart.md b/phospho-mkdocs/docs/local/quickstart.md
deleted file mode 100644
index b2fc2a5..0000000
--- a/phospho-mkdocs/docs/local/quickstart.md
+++ /dev/null
@@ -1,114 +0,0 @@
----
-title: Quickstart
-description: Run evaluations and detect events in your messages in minutes.
----
-
-Get started with **phospho lab**, the core of phospho. This is what the hosted version of phospho leverages to deliver insights.
-
-![phospho diagram](https://github.com/phospho-app/phospho/raw/dev/phospho_diagram.png)
-
-!!! note
-    Looking to setup logging to the phospho hosted version? [Read this guide instead.](/docs/getting-started)
-
-The **phospho lab** is a tool that allows you to run evaluations and detect events in your messages.
-
-1. Define custom workloads and jobs
-2. Run them on your messages in parallel
-3. Optimize your models and configurations
-
-## Installation
-
-Install the phospho package with the `lab` extra:
-
-```bash
-pip install "phospho[lab]"
-```
-
-You need to set your OPENAI_API_KEY as an environment variable.
-
-```bash
-export OPENAI_API_KEY=your_openai_api_key
-```
-
-If you don't want to use OpenAI, you can set up [Ollama](https://github.com/ollama/ollama) and set the following environment variable:
-
-```bash
-export OVERRIDE_WITH_OLLAMA_MODEL=mistral
-```
-
-This will replace all calls to OpenAI models with calls to the `mistral` model running with Ollama. Make sure you've downloaded the `mistral` model with Ollama first.
-
-## Create a workload
-
-The phospho lab lets you run extractions on your messages.
-
-Start by creating a workload. A workload is a set of jobs that you want to run on your messages.
-
-```python
-from phospho import lab
-
-# Create the phospho workload
-workload = lab.Workload()
-```
-
-## Define jobs
-
-Define jobs and add them to the workload. For example, let's add an event detection job. These are the same jobs you can set up in phospho cloud.
-
-```python
-# Define the job configurations
-class EventConfig(lab.JobConfig):
-    event_name: str
-    event_description: str
-
-# Let's add an event detection task to our workload
-workload.add_job(
-    lab.Job(
-        id="question_answering",
-        job_function=lab.job_library.event_detection,
-        config=EventConfig(
-            event_name="question_answering",
-            event_description="The user asks a question to the assistant",
-        ),
-    )
-)
-```
-
-## Run the workload
-
-Now, you can run the workload on your messages.
-
-Messages are a basic abstraction. They can be user messages or LLM outputs. They can contain metadata or additional information. It's up to the jobs to decide what to do with them.
-
-```python
-# Let's add some messages to analyze
-message = lab.Message(
-    id="my_message_id",
-    role="User",
-    content="What is the weather today in Paris?",
-)
-
-# Run the workload on the message
-# Note that this is an async function. Use asyncio.run to run it in a script.
-await workload.async_run(
-    messages=[message],
-    executor_type="sequential",
-)
-```
-
-## Gather results
-
-Results are stored in the workload.
-
-```python
-# Check the results of the workload
-message_results = workload.results["my_message_id"]
-
-print(f"Result of the event detection: {message_results['question_answering'].value}")
-```
-
-You can also get them in a pandas dataframe.
-
-```python
-workload.results_df()
-```
\ No newline at end of file
diff --git a/phospho-mkdocs/docs/models/classify.md b/phospho-mkdocs/docs/models/classify.md
deleted file mode 100644
index 8ac0d2b..0000000
--- a/phospho-mkdocs/docs/models/classify.md
+++ /dev/null
@@ -1,227 +0,0 @@
----
-title: Classification
-description: "Train, use and download classification models"
----
-
-!!! note
-    Request access to the preview by contacting us at contact@phospho.ai
-
-phospho can handle all the data processing, data engineering and model training for you.
-For now, only binary classification models are supported (learn [here](https://en.wikipedia.org/wiki/Binary_classification) what binary classification is).
-
-# Why train your custom classification model?
-
-Most LLM chains involve classification steps where the LLM is prompted with a classification task.
-Training your own classification model can help you to:
-
-- improve the accuracy of the classification
-- reduce the latency of the classification (as you have the model running in the application code)
-- reduce the cost of the classification (as you don't have to call an external LLM API)
-- reduce risks of downtime (as you don't depend on an external LLM API)
-
-# Available models
-
-`phospho-small` is a small text classification model that can be trained with a few examples (minimum 20 examples).
-It runs on CPU and once trained using phospho, you can download your trained model from Hugging Face.
-
-# Train a model on your data
-
-To train a model, you need to provide a list of examples for the model: at least 20 examples containing text, labels and a label description.
-Each example should have the following fields:
-
-- `text` (str): the text to classify (for example, a user message)
-- `label` (bool): True or False according to the classification
-- `label_text` (str): a few word description of the label when true (for example, "user asking for pricing")
-
-For example, your examples could look like this:
-
-```json
-[
-  {
-    "text": "Can I have a discount on phospho pro?",
-    "label": true,
-    "label_text": "user asking for pricing"
-  },
-  {
-    "text": "I want to know more about phospho pro",
-    "label": false,
-    "label_text": "user asking for pricing"
-  },
-  ...
-]
-```
-
-Start the training using the following API call or Python code snippet:
-
-=== "API"
-
-    ```bash
-    curl -X 'POST' \
-      'https://api.phospho.ai/v2/train' \
-      -H 'accept: application/json' \
-      -H 'Authorization: Bearer $PHOSPHO_API_KEY' \
-      -H 'Content-Type: application/json' \
-      -d '{
-      "model": "phospho-small",
-      "examples": [
-        {
-          "text": "How much is phospho pro?",
-          "label": true,
-          "label_text": "user asking for pricing"
-        },
-        {
-          "text": "I want to know more about phospho pro",
-          "label": false,
-          "label_text": "user asking for pricing"
-        },
-        ...
-      ],
-      "task_type": "binary-classification"
-    }'
-    ```
-
-=== "Python"
-
-    ```python
-    import phospho
-
-    phospho.init()
-
-    my_examples = [
-        {
-            "text": "How much is phospho pro?",
-            "label": True,
-            "label_text": "user asking for pricing"
-        },
-        {
-            "text": "I want to know more about phospho pro",
-            "label": False,
-            "label_text": "user asking for pricing"
-        },
-        ...
-    ]
-
-    model = phospho.train("phospho-small", my_examples)
-
-    print(model)
-    ```
-
-You will get a model object in the response. You will need the `model_id` to use the model. It should look like this: `phospho-small-8963ba3`.
-
-```json
-{
-  "id": "YOUR_MODEL_ID",
-  "created_at": 1714418246,
-  "status": "training",
-  "owned_by": "YOUR_ORG_ID",
-  "task_type": "binary-classification",
-  "context_window": 514
-}
-```
-
-The training will take a few minutes.
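Since training takes a few minutes, a practical pattern is to poll the model endpoint until the status flips from `training` to `trained` before sending predictions. A minimal sketch, assuming the `GET /v2/models/{model_id}` endpoint described just below, a `PHOSPHO_API_KEY` environment variable, and a placeholder model id:

```python
import os
import time

import requests

# Poll the model status until training finishes (sketch, not an official helper).
model_id = "YOUR_MODEL_ID"  # replace with the id returned by the /train call
url = f"https://api.phospho.ai/v2/models/{model_id}"
headers = {
    "accept": "application/json",
    "Authorization": f"Bearer {os.environ['PHOSPHO_API_KEY']}",
}

status = "training"
while status == "training":
    time.sleep(30)  # training takes a few minutes, so poll sparingly
    status = requests.get(url, headers=headers).json().get("status", "training")

print(f"Model {model_id} is now: {status}")  # expected to end as "trained"
```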
-You can check the status of the model using the following API call:
-
-=== "API"
-
-    ```bash
-    curl -X 'GET' \
-      'https://api.phospho.ai/v2/models/YOUR_MODEL_ID' \
-      -H 'accept: application/json' \
-      -H 'Authorization: Bearer $PHOSPHO_API_KEY'
-    ```
-
-=== "Python"
-
-    ```python
-    import requests
-    import os
-
-    model_id = "YOUR_MODEL_ID"  # model["id"] if you run the above code
-    url = f"https://api.phospho.ai/v2/models/{model_id}"
-
-    headers = {
-        "accept": "application/json",
-        "Content-Type": "application/json",
-        "Authorization": f"Bearer {os.environ['PHOSPHO_API_KEY']}"
-    }
-
-    response = requests.get(url, headers=headers)
-
-    print(response.text)
-    ```
-
-Your model will be ready when the status changes from `training` to `trained`.
-
-## Use the model
-
-You can use the model in 2 ways:
-
-- directly download it from Hugging Face (`phospho-small` runs on CPU)
-- through the phospho API
-
-### Download and use your model locally (recommended for production)
-
-You can download the model from the phospho Hugging Face repo. The model id is the same as the one you got when training the model.
-
-For example, if the model id is `phospho-small-8963ba3`, you can download the model from Hugging Face with the id `phospho-app/phospho-small-8963ba3`.
-
-Then you can use the model like any other Hugging Face model:
-
-```python
-from setfit import SetFitModel
-
-model = SetFitModel.from_pretrained("phospho-app/phospho-small-8963ba3")
-
-outputs = model.predict(["This is a sentence to classify", "Another sentence"])
-```
-
-Make sure to have enough RAM to load the model and the tokenizer in memory. The model is 420MB.
-
-### Use the model through the API
-
-!!! note
-    AI Models predict endpoints are in preview and not yet ready for production traffic.
-
-To use the model through the API, you need to send a POST request to the `/predict` endpoint with the model id and the batch of text to classify.
-If it's the first request you send, you might experience a delay as the model is loaded in memory.
-
-=== "API"
-
-    ```bash
-    curl -X 'POST' \
-      'https://api.phospho.ai/v2/predict' \
-      -H 'accept: application/json' \
-      -H 'Authorization: Bearer $PHOSPHO_API_KEY' \
-      -H 'Content-Type: application/json' \
-      -d '{
-      "inputs": [
-        "Can I have a discount on phospho pro?"
-      ],
-      "model": "YOUR_MODEL_ID"
-    }'
-    ```
-
-=== "Python"
-
-    ```python
-    # Coming soon!
-    ```
-
-## List your models
-
-You can also list all the models you have access to and that can accept requests:
-
-=== "API"
-
-    ```bash
-    curl -X 'GET' \
-      'https://api.phospho.ai/v2/models' \
-      -H 'accept: application/json' \
-      -H 'Authorization: Bearer $PHOSPHO_API_KEY'
-    ```
-
-=== "Python"
-
-    ```python
-    # Coming soon!
-    ```
diff --git a/phospho-mkdocs/docs/models/embeddings.md b/phospho-mkdocs/docs/models/embeddings.md
deleted file mode 100644
index d08dfd6..0000000
--- a/phospho-mkdocs/docs/models/embeddings.md
+++ /dev/null
@@ -1,138 +0,0 @@
----
-title: Intent Embeddings
-description: "Generate specific embeddings with phospho"
----
-
-!!! note
-    This model is in preview. Contact us for production or latency-sensitive specs.
-
-You can generate embeddings for text using the `intent-embed` model. Intent Embed is a model that generates embeddings for text, specifically to represent the user intent.
-Potential use cases include:
-
-- User intent classification
-- Intent similarity
-- Out-of-topic exclusion
-- Intent clustering and analytics
-- And more
-
-Read the technical paper here: [Phospho Intent Embeddings](https://research.phospho.ai/phospho_intent_embed.pdf).
-
-# Requirements
-
-Create an account on [phospho.ai](https://platform.phospho.ai) and get your API key.
-You need to have set up a billing method. You can add it in the Settings of your dashboard [here](https://platform.phospho.ai/org/settings/billing).
-
-# Usage
-
-## Using the OpenAI client
-
-The phospho embedding endpoint is OpenAI compatible. You can use the OpenAI client to send requests to the phospho API.
-
-```python
-from openai import OpenAI
-
-client = OpenAI(
-    api_key="YOUR_PHOSPHO_API_KEY",
-    base_url="https://api.phospho.ai/v2",
-)
-
-response = client.embeddings.create(
-    model="intent-embed",
-    input="I want to use the phospho intent embeddings api",
-    encoding_format="float",
-)
-
-print(response)
-```
-
-For now, the input must be a single string. Passing more than one string will result in an error.
-
-## Using the API directly
-
-To send a request, add:
-
-- `input`: The text to embed, usually a user query or message.
-- `model`: must be set to `intent-embed`.
-
-Optionally, to link this embedding to one of your projects, you can specify the following optional parameters:
-
-- `project_id`: The project id you want to link this embedding to.
-
-=== "API"
-
-    ```bash
-    curl -X 'POST' \
-      'https://api.phospho.ai/v2/embeddings' \
-      -H 'accept: application/json' \
-      -H 'Authorization: Bearer YOUR_PHOSPHO_API_KEY' \
-      -H 'Content-Type: application/json' \
-      -d '{
-      "input": "Your text to embed here",
-      "model": "intent-embed"
-    }'
-    ```
-
-=== "Python"
-
-    ```python
-    import requests
-
-    url = 'https://api.phospho.ai/v2/embeddings'
-    headers = {
-        'accept': 'application/json',
-        'Authorization': 'Bearer YOUR_PHOSPHO_API_KEY',
-        'Content-Type': 'application/json'
-    }
-    data = {
-        "input": "Your text to embed here",
-        "model": "intent-embed"
-    }
-
-    response = requests.post(url, json=data, headers=headers)
-
-    print(response.json()['data'][0]['embedding'])
-    ```
-
-You will get a response with the embeddings for the input text. The embeddings are a list of floats.
-
-```json
-{
-  "object": "list",
-  "data": [
-    {
-      "object": "embedding",
-      "embedding": [
-        -0.045429688,
-        -0.039863896,
-        0.0077658836,
-        ...],
-      "index": 0
-    }
-  ],
-  "model": "intent-embed",
-  "usage": {
-    "prompt_tokens": 3,
-    "total_tokens": 3
-  }
-}
-```
-
-These embeddings can be stored in vector databases like Pinecone, Milvus, Chroma, Qdrant, etc. for similarity search, clustering, and other analytics.
-
-# Pricing
-
-The pricing is based on the number of tokens in the input text.
-
-**Note:** You need to have a billing method setup to use the model. Access your [billing portal](https://platform.phospho.ai/org/settings/billing) to add one.
-
-| Model name     | Price per 1M input tokens |
-| -------------- | ------------------------- |
-| `intent-embed` | $0.94                     |
-
-!!! info
-    You are billed in \$1 increments.
-
-[Contact us](mailto:contact@phospho.ai) for high volume pricing.
diff --git a/phospho-mkdocs/docs/models/llm.md b/phospho-mkdocs/docs/models/llm.md
deleted file mode 100644
index a373077..0000000
--- a/phospho-mkdocs/docs/models/llm.md
+++ /dev/null
@@ -1,137 +0,0 @@
----
-title: LLMs
-description: "Call LLMs through the phospho proxy"
----
-
-!!! note
-    Access to this feature is restricted. Contact us at contact@phospho.ai to
-    request access.
-
-To access any model through the phospho proxy, you need to have a phospho API key and a project on the phospho platform. You can get one by signing up on [phospho.ai](https://platform.phospho.ai).
-
-To access the Tak API, please refer to the [Tak API page](/docs/models/tak).
-
-# OpenAI
-
-The phospho proxy is OpenAI compatible. You can use the OpenAI client to send requests to the phospho API. Messages sent through the phospho proxy will appear in your phospho dashboard.
-
-Available models:
-
-- `gpt-4o`
-- `gpt-4o-mini`
-
-To access these models through the phospho proxy, you need to:
-
-- set the base_url to `https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/` (instead of `https://api.openai.com/v1/`)
-- set the OPENAI_API_KEY to your phospho API key
-- set the model to the desired model with the prefix `openai:` (e.g. `openai:gpt-4o` or `openai:gpt-4o-mini`)
-
-=== "OpenAI Python"
-
-    ```python
-    import openai
-
-    from openai import OpenAI
-    client = OpenAI(api_key="PHOSPHO_API_KEY", base_url="https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/")
-
-    completion = client.chat.completions.create(
-        model="openai:gpt-4o",
-        messages=[
-            {"role": "system", "content": "You are a helpful assistant."},
-            {"role": "user", "content": "Hello!"}
-        ]
-    )
-
-    print(completion.choices[0].message)
-    ```
-
-=== "API"
-
-    ```bash
-    curl https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/chat/completions \
-      -H "Content-Type: application/json" \
-      -H "Authorization: Bearer $PHOSPHO_API_KEY" \
-      -d '{
-        "model": "openai:gpt-4o",
-        "messages": [
-          {
-            "role": "system",
-            "content": "You are a helpful assistant."
-          },
-          {
-            "role": "user",
-            "content": "Hello!"
-          }
-        ]
-      }'
-    ```
-
-=== "JavaScript"
-
-    ```javascript
-    // Same as for the Python SDK
-    ```
-
-# Mistral AI
-
-The phospho proxy is Mistral AI compatible. You can use the Mistral client to send requests to the phospho API. Messages sent through the phospho proxy will appear in your phospho dashboard.
-
-Available models:
-
-- `mistral-large-latest`
-- `mistral-small-latest`
-
-To access these models through the phospho proxy, you need to:
-
-- set the server_url to `https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/`
-- set the MISTRAL_API_KEY to your phospho API key
-- set the model to the desired model with the prefix `mistral:` (e.g. `mistral:mistral-large-latest` or `mistral:mistral-small-latest`)
-
-=== "Mistral AI Python"
-
-    ```python
-    import mistralai
-
-    from mistralai import Mistral
-    client = Mistral(api_key="PHOSPHO_API_KEY", server_url="https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/")
-
-    completion = client.chat.complete(
-        model="mistral:mistral-large-latest",
-        messages=[
-            {"role": "system", "content": "You are a helpful assistant."},
-            {"role": "user", "content": "Hello!"}
-        ]
-    )
-
-    print(completion.choices[0].message)
-    ```
-
-=== "API"
-
-    ```bash
-    curl https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/v1/chat/completions \
-      -H "Content-Type: application/json" \
-      -H "Authorization: Bearer $PHOSPHO_API_KEY" \
-      -d '{
-        "model": "mistral:mistral-large-latest",
-        "messages": [
-          {
-            "role": "system",
-            "content": "You are a helpful assistant."
-          },
-          {
-            "role": "user",
-            "content": "Hello!"
-          }
-        ]
-      }'
    ```
-
-=== "JavaScript"
-
-    ```javascript
-    // Same as for the Python SDK
-    ```
-
-# Anthropic
-
-Docs coming soon.
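Until the Anthropic docs land, here is a minimal sketch of the proxy pattern described above, wrapped in a small helper so call sites only change the model prefix. It uses the OpenAI Python SDK exactly as documented; the helper name and the `PHOSPHO_API_KEY` / `PHOSPHO_PROJECT_ID` environment variables are illustrative, not official names:

```python
import os

from openai import OpenAI

def phospho_proxy_client() -> OpenAI:
    """Build an OpenAI client pointed at the phospho proxy (sketch)."""
    project_id = os.environ["PHOSPHO_PROJECT_ID"]  # illustrative variable name
    return OpenAI(
        api_key=os.environ["PHOSPHO_API_KEY"],  # your phospho API key
        base_url=f"https://api.phospho.ai/v2/{project_id}/",
    )

client = phospho_proxy_client()
completion = client.chat.completions.create(
    model="openai:gpt-4o-mini",  # the prefix selects the provider, as described above
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```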
diff --git a/phospho-mkdocs/docs/models/multimodal.md b/phospho-mkdocs/docs/models/multimodal.md
deleted file mode 100644
index aa38572..0000000
--- a/phospho-mkdocs/docs/models/multimodal.md
+++ /dev/null
@@ -1,88 +0,0 @@
----
-title: Multimodal LLM
-description: "Enable your LLM app to understand any images"
----
-
-Enable your LLM app to understand images with the phospho multimodal model.
-For optimal performance, this model is not censored or moderated. Ensuring this model is used in a safe way is your responsibility.
-
-# Requirements
-
-Create an account on [phospho.ai](https://platform.phospho.ai) and get your API key.
-You need to have set up a billing method. You can add it in the Settings of your dashboard [here](https://platform.phospho.ai/org/settings/billing).
-
-# Sending a request
-
-To send a request, add:
-
-- `text`: your text prompt. For instance: "What is this?"
-- `image_url`: either a URL of the image or the base64 encoded image data.
-  The `inputs` list must be of length 1.
-
-Optionally, to better control the generation, you can specify the following optional parameters:
-
-- `max_new_tokens` (int): default to 200. Max 250. The maximum number of tokens that can be generated in the response.
-- `temperature` (float, between 0.1 and 1.0): Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
-- `repetition_penalty` (float): Default to 1.15. This parameter helps in reducing the repetition of words in the generated content.
-- `top_p` (float, between 0.0 and 1.0): Default to 1.0. This parameter controls the diversity of the response by limiting the possible next tokens to the top p percent most likely.
-
-If you pass a URL, make sure it is a generally available image (for instance by opening the link in a private browsing window).
-To encode an image in base 64, you can use [this website](https://base64.guru/converter/encode/image).
-
-=== "API"
-
-    ```bash
-    curl -X 'POST' \
-      'https://api.phospho.ai/v2/predict' \
-      -H 'accept: application/json' \
-      -H 'Authorization: Bearer YOUR_PHOSPHO_API_KEY' \
-      -H 'Content-Type: application/json' \
-      -d '{
-      "inputs": [{"text": "What is this?", "image_url": "http://images.cocodataset.org/val2017/000000039769.jpg"}],
-      "model": "phospho-multimodal"
-    }'
-    ```
-
-=== "Python"
-
-    ```python
-    import requests
-
-    url = 'https://api.phospho.ai/v2/predict'
-    headers = {
-        'accept': 'application/json',
-        'Authorization': 'Bearer YOUR_PHOSPHO_API_KEY',
-        'Content-Type': 'application/json'
-    }
-    data = {
-        "inputs": [{"text": "What is this?", "image_url": "http://images.cocodataset.org/val2017/000000039769.jpg"}],
-        "model": "phospho-multimodal"
-    }
-
-    response = requests.post(url, json=data, headers=headers)
-
-    print(response.json()['predictions'][0]['description'])
-    ```
-
-!!! note
-    This API endpoint is for preview and not optimal for production scale serving.
-    Contact us for on-premise deployment or high performance endpoints.
-
-# Pricing
-
-The pricing is based on the number of images sent.
-
-**Note:** You need to have a billing method setup to use the model. Access your [billing portal](https://platform.phospho.ai/org/settings/billing) to add one.
-
-| Model name           | Price per 100 images | Price per 1000 images |
-| -------------------- | -------------------- | --------------------- |
-| `phospho-multimodal` | $1                   | $10                   |
-
-!!! info
-    You are billed in \$1 increments.
-
-    _Example: if you send 150 images, you will be billed \$2._
-
-[Contact us](mailto:contact@phospho.ai) for high volume pricing.
diff --git a/phospho-mkdocs/docs/models/tak.md b/phospho-mkdocs/docs/models/tak.md
deleted file mode 100644
index 1027d1d..0000000
--- a/phospho-mkdocs/docs/models/tak.md
+++ /dev/null
@@ -1,107 +0,0 @@
----
-title: Tak API
-description: "Call Tak through API"
----
-
-!!! note
-    Access to this feature is restricted. Contact us at contact@phospho.ai to
-    request access.
-
-Please note that the version available via API is different from the one available online at [tak.phospho.ai](https://tak.phospho.ai).
-
-To access the API, you need to have a phospho API key and a project on the phospho platform. You can get one by signing up on [phospho.ai](https://platform.phospho.ai).
-
-The tak API endpoint is OpenAI compatible. You can use the OpenAI client to send requests to the tak API. Messages sent will appear in your phospho dashboard.
-
-Available models:
-
-- `tak-large`: leverages GPT-4o, can search the web and the news.
-
-## Capabilities
-
-Tak can search the web and the news to provide up-to-date information on a wide range of topics.
-It can also perform standard LLM tasks such as summarization, translation, and question answering.
-Answers are formatted in markdown and contain the sources of the information (links in Markdown format).
-
-Tak can handle tasks requiring multiple web searches in a single query, such as: `What is NVIDIA's current stock price? And what is Apple's stock price?`
-
-Streaming is supported.
-
-## Limits
-
-The default rate limit is 500 requests per minute. The maximum context window is 128k tokens.
-
-## Sending requests
-
-To send requests, you need to:
-
-- set the base_url to `https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/` (instead of `https://api.openai.com/v1/`)
-- set the OPENAI_API_KEY to your phospho API key
-- set the model to `phospho:tak-large`
-- no need to specify a `system` message. If you add one, it won't be followed.
-
-=== "OpenAI Python SDK"
-
-    ```python
-    import openai
-
-    from openai import OpenAI
-    client = OpenAI(api_key="PHOSPHO_API_KEY", base_url="https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/")
-
-    completion = client.chat.completions.create(
-        model="phospho:tak-large",
-        messages=[
-            {"role": "user", "content": "What are the latest AI news in France?"}
-        ]
-    )
-
-    print(completion.choices[0].message)
-
-    # Or with streaming
-
-    response = client.chat.completions.create(
-        model='phospho:tak-large',
-        messages=[
-            {'role': 'user', 'content': "Count to 10"}
-        ],
-        temperature=0,
-        stream=True  # this time, we set stream=True
-    )
-
-    for chunk in response:
-        print(chunk.choices[0].delta.content, end="", flush=True)
-    ```
-
-=== "API"
-
-    ```bash
-    curl https://api.phospho.ai/v2/{YOUR_PHOSPHO_PROJECT_ID}/chat/completions \
-      -H "Content-Type: application/json" \
-      -H "Authorization: Bearer $PHOSPHO_API_KEY" \
-      -d '{
-        "model": "phospho:tak-large",
-        "messages": [
-          {
-            "role": "user",
-            "content": "What are the latest AI news in France?"
-          }
-        ]
-      }'
-    ```
-
-=== "JavaScript OpenAI SDK"
-
-    ```javascript
-    // Same as for the Python SDK
-    ```
-
-# Pricing
-
-The pricing is based on the number of tokens in input messages and output completions.
-
-**Note:** You need to have a billing method setup to use the model. Access your [billing portal](https://platform.phospho.ai/org/settings/billing) to add one.
- -| Model name | Price per 1M input tokens | Price per 1M output tokens | -| ----------- | ------------------------- | -------------------------- | -| `tak-large` | $5 | $20 | diff --git a/phospho-mkdocs/docs/self-hosting.md b/phospho-mkdocs/docs/self-hosting.md deleted file mode 100644 index 7572baa..0000000 --- a/phospho-mkdocs/docs/self-hosting.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Self-hosting -description: "Host phospho on your own infrastructure using phospho open source" ---- - -The phospho platform can be hosted on your own infrastructure. [The code is open source and available here.](https://github.com/phospho-app/phospho) - -This is useful if you want to keep your data private or if you have specific data compliance requirements. - -## How to deploy phospho with Docker? - -The platform can be deployed using Docker. Start by cloning the phospho repository. - -```bash -git clone https://github.com/phospho-app/phospho.git -``` - -Once the environment variables are set up, you can then use [Docker compose](https://docs.docker.com/compose/intro/features-uses/) to quickly build and deploy the platform. - -```bash -docker compose up -``` - -Please follow [this guide for the complete instructions](https://github.com/phospho-app/phospho/blob/dev/DeploymentGuide.md) on how to setup environment variables. - -## How to deploy phospho on the Cloud? - -phospho is compatible with any cloud provider thanks to its container-based architecture. - -- Google Cloud platform (feel free to refer to the [deployment scripts here](https://github.com/phospho-app/phospho/tree/dev/.github/workflows)) -- Microsoft Azure -- Amazon Web Services - -To get started easily, we recommend you use [Porter.run](https://docs.porter.run/introduction). 
- -### Contact us - -To get help, feel free to reach out at [contact@phospho.ai](mailto:contact@phospho.ai) diff --git a/phospho-mkdocs/mkdocs.yml b/phospho-mkdocs/mkdocs.yml deleted file mode 100755 index 3938762..0000000 --- a/phospho-mkdocs/mkdocs.yml +++ /dev/null @@ -1,140 +0,0 @@ -site_name: phospho platform docs -site_url: https://phospho-app.github.io/docs/ -repo_url: https://github.com/phospho-app/phospho - - -theme: - name: material - icon: - logo: fontawesome/solid/vial - favicon: favicon.png - features: - - content.tabs.link - - navigation.tabs - palette: - # Palette toggle for dark mode - - scheme: default - primary: green - accent: teal - toggle: - icon: material/brightness-7 - name: Switch to dark mode - - # Palette toggle for light mode - - scheme: slate - primary: green - accent: teal - toggle: - icon: material/brightness-4 - name: Switch to light mode - - -plugins: - - glightbox - -markdown_extensions: - - pymdownx.highlight: - anchor_linenums: true - line_spans: __span - pygments_lang_class: true - - pymdownx.inlinehilite - - pymdownx.snippets - - pymdownx.superfences - - pymdownx.tabbed: - alternate_style: true - - attr_list - - md_in_html - - pymdownx.emoji: - emoji_index: !!python/name:material.extensions.emoji.twemoji - emoji_generator: !!python/name:material.extensions.emoji.to_svg - - def_list - - pymdownx.tasklist: - custom_checkbox: true - - admonition - - pymdownx.details - - pymdownx.blocks.caption - -extra: - social: - - icon: fontawesome/brands/github - link: https://github.com/phospho-app/ - - icon: fontawesome/brands/x-twitter - link: https://x.com/phospho_ai - - icon: fontawesome/brands/linkedin - link: https://www.linkedin.com/company/phospho-app/posts/?feedView=all - - icon: fontawesome/brands/discord - link: https://discord.gg/m8wzBGQA55 - -nav: - - Platform: - - index.md - - getting-started.md - - Import data: - - import-data/import-file.md - - import-data/api-integration.md - - import-data/import-langsmith.md - - import-data/import-langfuse.md - - import-data/tracing.md - - self-hosting.md - - cli.md - - - Analytics: - - analytics/ab-test.md - - analytics/events.md - - analytics/tagging.md - - analytics/clustering.md - - User analytics: - - analytics/sessions-and-users.md - - analytics/language.md - - analytics/sentiment-analysis.md - - analytics/user-feedback.md - - analytics/usage-based-billing.md - - - Connectors: - - Python: - - integrations/python/logging.md - - Logging examples in Python: - - integrations/python/examples/openai-agent.md - - integrations/python/examples/openai-streamlit.md - - integrations/python/analytics.md - - Discover the Lab: - - local/quickstart.md - - local/custom-job.md - - local/optimize.md - - local/llm-provider.md - - integrations/python/testing.md - - integrations/javascript/logging.md - - Langchain & Langsmith: - - integrations/langchain.md - - import-data/import-langsmith.md - - import-data/import-langfuse.md - - integrations/supabase.md - - - Integrations: - - integrations/argilla.md - - integrations/postgresql.md - - integrations/powerbi.md - - integrations/python/analytics.md - - - Models API: - - models/embeddings.md - - models/llm.md - - models/tak.md - - - Guides: - - guides/welcome-guide.md - - guides/getting-started.md - - guides/LLM-judge.md - - guides/user-intent.md - - guides/understand-your-data.md - - guides/export-dataset-argilla.md - - - API Reference: - - api-reference/introduction.md - - - Examples: - - examples/introduction.md - - - Go to Platform: - - 
https://platform.phospho.ai - diff --git a/phospho-mkdocs/pyproject.toml b/phospho-mkdocs/pyproject.toml deleted file mode 100644 index 1a46896..0000000 --- a/phospho-mkdocs/pyproject.toml +++ /dev/null @@ -1,11 +0,0 @@ -[project] -name = "mkdocs_phospho" -version = "0.1.0" -description = "phospho docs" -readme = "README.md" -requires-python = ">=3.11" -dependencies = [ - "mkdocs-material>=9.5.49", - "mkdocs>=1.6.1", - "mkdocs-glightbox>=0.4.0", -] diff --git a/phospho-mkdocs/uv.lock b/phospho-mkdocs/uv.lock deleted file mode 100644 index c50b0a2..0000000 --- a/phospho-mkdocs/uv.lock +++ /dev/null @@ -1,513 +0,0 @@ -version = 1 -requires-python = ">=3.11" - -[[package]] -name = "babel" -version = "2.16.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/2a/74/f1bc80f23eeba13393b7222b11d95ca3af2c1e28edca18af487137eefed9/babel-2.16.0.tar.gz", hash = "sha256:d1f3554ca26605fe173f3de0c65f750f5a42f924499bf134de6423582298e316", size = 9348104 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/ed/20/bc79bc575ba2e2a7f70e8a1155618bb1301eaa5132a8271373a6903f73f8/babel-2.16.0-py3-none-any.whl", hash = "sha256:368b5b98b37c06b7daf6696391c3240c938b37767d4584413e8438c5c435fa8b", size = 9587599 }, -] - -[[package]] -name = "certifi" -version = "2024.12.14" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/0f/bd/1d41ee578ce09523c81a15426705dd20969f5abf006d1afe8aeff0dd776a/certifi-2024.12.14.tar.gz", hash = "sha256:b650d30f370c2b724812bee08008be0c4163b163ddaec3f2546c1caf65f191db", size = 166010 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/a5/32/8f6669fc4798494966bf446c8c4a162e0b5d893dff088afddf76414f70e1/certifi-2024.12.14-py3-none-any.whl", hash = "sha256:1275f7a45be9464efc1173084eaa30f866fe2e47d389406136d332ed4967ec56", size = 164927 }, -] - -[[package]] -name = "charset-normalizer" -version = "3.4.1" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/16/b0/572805e227f01586461c80e0fd25d65a2115599cc9dad142fee4b747c357/charset_normalizer-3.4.1.tar.gz", hash = "sha256:44251f18cd68a75b56585dd00dae26183e102cd5e0f9f1466e6df5da2ed64ea3", size = 123188 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/72/80/41ef5d5a7935d2d3a773e3eaebf0a9350542f2cab4eac59a7a4741fbbbbe/charset_normalizer-3.4.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:8bfa33f4f2672964266e940dd22a195989ba31669bd84629f05fab3ef4e2d125", size = 194995 }, - { url = "https://files.pythonhosted.org/packages/7a/28/0b9fefa7b8b080ec492110af6d88aa3dea91c464b17d53474b6e9ba5d2c5/charset_normalizer-3.4.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:28bf57629c75e810b6ae989f03c0828d64d6b26a5e205535585f96093e405ed1", size = 139471 }, - { url = "https://files.pythonhosted.org/packages/71/64/d24ab1a997efb06402e3fc07317e94da358e2585165930d9d59ad45fcae2/charset_normalizer-3.4.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f08ff5e948271dc7e18a35641d2f11a4cd8dfd5634f55228b691e62b37125eb3", size = 149831 }, - { url = "https://files.pythonhosted.org/packages/37/ed/be39e5258e198655240db5e19e0b11379163ad7070962d6b0c87ed2c4d39/charset_normalizer-3.4.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:234ac59ea147c59ee4da87a0c0f098e9c8d169f4dc2a159ef720f1a61bbe27cd", size = 142335 }, - { url = 
"https://files.pythonhosted.org/packages/88/83/489e9504711fa05d8dde1574996408026bdbdbd938f23be67deebb5eca92/charset_normalizer-3.4.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fd4ec41f914fa74ad1b8304bbc634b3de73d2a0889bd32076342a573e0779e00", size = 143862 }, - { url = "https://files.pythonhosted.org/packages/c6/c7/32da20821cf387b759ad24627a9aca289d2822de929b8a41b6241767b461/charset_normalizer-3.4.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eea6ee1db730b3483adf394ea72f808b6e18cf3cb6454b4d86e04fa8c4327a12", size = 145673 }, - { url = "https://files.pythonhosted.org/packages/68/85/f4288e96039abdd5aeb5c546fa20a37b50da71b5cf01e75e87f16cd43304/charset_normalizer-3.4.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:c96836c97b1238e9c9e3fe90844c947d5afbf4f4c92762679acfe19927d81d77", size = 140211 }, - { url = "https://files.pythonhosted.org/packages/28/a3/a42e70d03cbdabc18997baf4f0227c73591a08041c149e710045c281f97b/charset_normalizer-3.4.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:4d86f7aff21ee58f26dcf5ae81a9addbd914115cdebcbb2217e4f0ed8982e146", size = 148039 }, - { url = "https://files.pythonhosted.org/packages/85/e4/65699e8ab3014ecbe6f5c71d1a55d810fb716bbfd74f6283d5c2aa87febf/charset_normalizer-3.4.1-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:09b5e6733cbd160dcc09589227187e242a30a49ca5cefa5a7edd3f9d19ed53fd", size = 151939 }, - { url = "https://files.pythonhosted.org/packages/b1/82/8e9fe624cc5374193de6860aba3ea8070f584c8565ee77c168ec13274bd2/charset_normalizer-3.4.1-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:5777ee0881f9499ed0f71cc82cf873d9a0ca8af166dfa0af8ec4e675b7df48e6", size = 149075 }, - { url = "https://files.pythonhosted.org/packages/3d/7b/82865ba54c765560c8433f65e8acb9217cb839a9e32b42af4aa8e945870f/charset_normalizer-3.4.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:237bdbe6159cff53b4f24f397d43c6336c6b0b42affbe857970cefbb620911c8", size = 144340 }, - { url = "https://files.pythonhosted.org/packages/b5/b6/9674a4b7d4d99a0d2df9b215da766ee682718f88055751e1e5e753c82db0/charset_normalizer-3.4.1-cp311-cp311-win32.whl", hash = "sha256:8417cb1f36cc0bc7eaba8ccb0e04d55f0ee52df06df3ad55259b9a323555fc8b", size = 95205 }, - { url = "https://files.pythonhosted.org/packages/1e/ab/45b180e175de4402dcf7547e4fb617283bae54ce35c27930a6f35b6bef15/charset_normalizer-3.4.1-cp311-cp311-win_amd64.whl", hash = "sha256:d7f50a1f8c450f3925cb367d011448c39239bb3eb4117c36a6d354794de4ce76", size = 102441 }, - { url = "https://files.pythonhosted.org/packages/0a/9a/dd1e1cdceb841925b7798369a09279bd1cf183cef0f9ddf15a3a6502ee45/charset_normalizer-3.4.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:73d94b58ec7fecbc7366247d3b0b10a21681004153238750bb67bd9012414545", size = 196105 }, - { url = "https://files.pythonhosted.org/packages/d3/8c/90bfabf8c4809ecb648f39794cf2a84ff2e7d2a6cf159fe68d9a26160467/charset_normalizer-3.4.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dad3e487649f498dd991eeb901125411559b22e8d7ab25d3aeb1af367df5efd7", size = 140404 }, - { url = "https://files.pythonhosted.org/packages/ad/8f/e410d57c721945ea3b4f1a04b74f70ce8fa800d393d72899f0a40526401f/charset_normalizer-3.4.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c30197aa96e8eed02200a83fba2657b4c3acd0f0aa4bdc9f6c1af8e8962e0757", size = 150423 }, - { url = 
"https://files.pythonhosted.org/packages/f0/b8/e6825e25deb691ff98cf5c9072ee0605dc2acfca98af70c2d1b1bc75190d/charset_normalizer-3.4.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2369eea1ee4a7610a860d88f268eb39b95cb588acd7235e02fd5a5601773d4fa", size = 143184 }, - { url = "https://files.pythonhosted.org/packages/3e/a2/513f6cbe752421f16d969e32f3583762bfd583848b763913ddab8d9bfd4f/charset_normalizer-3.4.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc2722592d8998c870fa4e290c2eec2c1569b87fe58618e67d38b4665dfa680d", size = 145268 }, - { url = "https://files.pythonhosted.org/packages/74/94/8a5277664f27c3c438546f3eb53b33f5b19568eb7424736bdc440a88a31f/charset_normalizer-3.4.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ffc9202a29ab3920fa812879e95a9e78b2465fd10be7fcbd042899695d75e616", size = 147601 }, - { url = "https://files.pythonhosted.org/packages/7c/5f/6d352c51ee763623a98e31194823518e09bfa48be2a7e8383cf691bbb3d0/charset_normalizer-3.4.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:804a4d582ba6e5b747c625bf1255e6b1507465494a40a2130978bda7b932c90b", size = 141098 }, - { url = "https://files.pythonhosted.org/packages/78/d4/f5704cb629ba5ab16d1d3d741396aec6dc3ca2b67757c45b0599bb010478/charset_normalizer-3.4.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:0f55e69f030f7163dffe9fd0752b32f070566451afe180f99dbeeb81f511ad8d", size = 149520 }, - { url = "https://files.pythonhosted.org/packages/c5/96/64120b1d02b81785f222b976c0fb79a35875457fa9bb40827678e54d1bc8/charset_normalizer-3.4.1-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:c4c3e6da02df6fa1410a7680bd3f63d4f710232d3139089536310d027950696a", size = 152852 }, - { url = "https://files.pythonhosted.org/packages/84/c9/98e3732278a99f47d487fd3468bc60b882920cef29d1fa6ca460a1fdf4e6/charset_normalizer-3.4.1-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:5df196eb874dae23dcfb968c83d4f8fdccb333330fe1fc278ac5ceeb101003a9", size = 150488 }, - { url = "https://files.pythonhosted.org/packages/13/0e/9c8d4cb99c98c1007cc11eda969ebfe837bbbd0acdb4736d228ccaabcd22/charset_normalizer-3.4.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e358e64305fe12299a08e08978f51fc21fac060dcfcddd95453eabe5b93ed0e1", size = 146192 }, - { url = "https://files.pythonhosted.org/packages/b2/21/2b6b5b860781a0b49427309cb8670785aa543fb2178de875b87b9cc97746/charset_normalizer-3.4.1-cp312-cp312-win32.whl", hash = "sha256:9b23ca7ef998bc739bf6ffc077c2116917eabcc901f88da1b9856b210ef63f35", size = 95550 }, - { url = "https://files.pythonhosted.org/packages/21/5b/1b390b03b1d16c7e382b561c5329f83cc06623916aab983e8ab9239c7d5c/charset_normalizer-3.4.1-cp312-cp312-win_amd64.whl", hash = "sha256:6ff8a4a60c227ad87030d76e99cd1698345d4491638dfa6673027c48b3cd395f", size = 102785 }, - { url = "https://files.pythonhosted.org/packages/38/94/ce8e6f63d18049672c76d07d119304e1e2d7c6098f0841b51c666e9f44a0/charset_normalizer-3.4.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:aabfa34badd18f1da5ec1bc2715cadc8dca465868a4e73a0173466b688f29dda", size = 195698 }, - { url = "https://files.pythonhosted.org/packages/24/2e/dfdd9770664aae179a96561cc6952ff08f9a8cd09a908f259a9dfa063568/charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22e14b5d70560b8dd51ec22863f370d1e595ac3d024cb8ad7d308b4cd95f8313", size = 140162 }, - { url = 
"https://files.pythonhosted.org/packages/24/4e/f646b9093cff8fc86f2d60af2de4dc17c759de9d554f130b140ea4738ca6/charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8436c508b408b82d87dc5f62496973a1805cd46727c34440b0d29d8a2f50a6c9", size = 150263 }, - { url = "https://files.pythonhosted.org/packages/5e/67/2937f8d548c3ef6e2f9aab0f6e21001056f692d43282b165e7c56023e6dd/charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2d074908e1aecee37a7635990b2c6d504cd4766c7bc9fc86d63f9c09af3fa11b", size = 142966 }, - { url = "https://files.pythonhosted.org/packages/52/ed/b7f4f07de100bdb95c1756d3a4d17b90c1a3c53715c1a476f8738058e0fa/charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:955f8851919303c92343d2f66165294848d57e9bba6cf6e3625485a70a038d11", size = 144992 }, - { url = "https://files.pythonhosted.org/packages/96/2c/d49710a6dbcd3776265f4c923bb73ebe83933dfbaa841c5da850fe0fd20b/charset_normalizer-3.4.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:44ecbf16649486d4aebafeaa7ec4c9fed8b88101f4dd612dcaf65d5e815f837f", size = 147162 }, - { url = "https://files.pythonhosted.org/packages/b4/41/35ff1f9a6bd380303dea55e44c4933b4cc3c4850988927d4082ada230273/charset_normalizer-3.4.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:0924e81d3d5e70f8126529951dac65c1010cdf117bb75eb02dd12339b57749dd", size = 140972 }, - { url = "https://files.pythonhosted.org/packages/fb/43/c6a0b685fe6910d08ba971f62cd9c3e862a85770395ba5d9cad4fede33ab/charset_normalizer-3.4.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:2967f74ad52c3b98de4c3b32e1a44e32975e008a9cd2a8cc8966d6a5218c5cb2", size = 149095 }, - { url = "https://files.pythonhosted.org/packages/4c/ff/a9a504662452e2d2878512115638966e75633519ec11f25fca3d2049a94a/charset_normalizer-3.4.1-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:c75cb2a3e389853835e84a2d8fb2b81a10645b503eca9bcb98df6b5a43eb8886", size = 152668 }, - { url = "https://files.pythonhosted.org/packages/6c/71/189996b6d9a4b932564701628af5cee6716733e9165af1d5e1b285c530ed/charset_normalizer-3.4.1-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:09b26ae6b1abf0d27570633b2b078a2a20419c99d66fb2823173d73f188ce601", size = 150073 }, - { url = "https://files.pythonhosted.org/packages/e4/93/946a86ce20790e11312c87c75ba68d5f6ad2208cfb52b2d6a2c32840d922/charset_normalizer-3.4.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:fa88b843d6e211393a37219e6a1c1df99d35e8fd90446f1118f4216e307e48cd", size = 145732 }, - { url = "https://files.pythonhosted.org/packages/cd/e5/131d2fb1b0dddafc37be4f3a2fa79aa4c037368be9423061dccadfd90091/charset_normalizer-3.4.1-cp313-cp313-win32.whl", hash = "sha256:eb8178fe3dba6450a3e024e95ac49ed3400e506fd4e9e5c32d30adda88cbd407", size = 95391 }, - { url = "https://files.pythonhosted.org/packages/27/f2/4f9a69cc7712b9b5ad8fdb87039fd89abba997ad5cbe690d1835d40405b0/charset_normalizer-3.4.1-cp313-cp313-win_amd64.whl", hash = "sha256:b1ac5992a838106edb89654e0aebfc24f5848ae2547d22c2c3f66454daa11971", size = 102702 }, - { url = "https://files.pythonhosted.org/packages/0e/f6/65ecc6878a89bb1c23a086ea335ad4bf21a588990c3f535a227b9eea9108/charset_normalizer-3.4.1-py3-none-any.whl", hash = "sha256:d98b1668f06378c6dbefec3b92299716b931cd4e6061f3c875a71ced1780ab85", size = 49767 }, -] - -[[package]] -name = "click" -version = "8.1.8" -source = { registry = "https://pypi.org/simple" } -dependencies = [ 
- { name = "colorama", marker = "platform_system == 'Windows'" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/b9/2e/0090cbf739cee7d23781ad4b89a9894a41538e4fcf4c31dcdd705b78eb8b/click-8.1.8.tar.gz", hash = "sha256:ed53c9d8990d83c2a27deae68e4ee337473f6330c040a31d4225c9574d16096a", size = 226593 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/7e/d4/7ebdbd03970677812aac39c869717059dbb71a4cfc033ca6e5221787892c/click-8.1.8-py3-none-any.whl", hash = "sha256:63c132bbbed01578a06712a2d1f497bb62d9c1c0d329b7903a866228027263b2", size = 98188 }, -] - -[[package]] -name = "colorama" -version = "0.4.6" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335 }, -] - -[[package]] -name = "ghp-import" -version = "2.1.0" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "python-dateutil" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/d9/29/d40217cbe2f6b1359e00c6c307bb3fc876ba74068cbab3dde77f03ca0dc4/ghp-import-2.1.0.tar.gz", hash = "sha256:9c535c4c61193c2df8871222567d7fd7e5014d835f97dc7b7439069e2413d343", size = 10943 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/f7/ec/67fbef5d497f86283db54c22eec6f6140243aae73265799baaaa19cd17fb/ghp_import-2.1.0-py3-none-any.whl", hash = "sha256:8337dd7b50877f163d4c0289bc1f1c7f127550241988d568c1db512c4324a619", size = 11034 }, -] - -[[package]] -name = "idna" -version = "3.10" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/f1/70/7703c29685631f5a7590aa73f1f1d3fa9a380e654b86af429e0934a32f7d/idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9", size = 190490 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/76/c6/c88e154df9c4e1a2a66ccf0005a88dfb2650c1dffb6f5ce603dfbd452ce3/idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3", size = 70442 }, -] - -[[package]] -name = "jinja2" -version = "3.1.5" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "markupsafe" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/af/92/b3130cbbf5591acf9ade8708c365f3238046ac7cb8ccba6e81abccb0ccff/jinja2-3.1.5.tar.gz", hash = "sha256:8fefff8dc3034e27bb80d67c671eb8a9bc424c0ef4c0826edbff304cceff43bb", size = 244674 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/bd/0f/2ba5fbcd631e3e88689309dbe978c5769e883e4b84ebfe7da30b43275c5a/jinja2-3.1.5-py3-none-any.whl", hash = "sha256:aba0f4dc9ed8013c424088f68a5c226f7d6097ed89b246d7749c2ec4175c6adb", size = 134596 }, -] - -[[package]] -name = "markdown" -version = "3.7" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/54/28/3af612670f82f4c056911fbbbb42760255801b3068c48de792d354ff4472/markdown-3.7.tar.gz", hash = "sha256:2ae2471477cfd02dbbf038d5d9bc226d40def84b4fe2986e49b59b6b472bbed2", size = 357086 } -wheels = [ - { url = 
"https://files.pythonhosted.org/packages/3f/08/83871f3c50fc983b88547c196d11cf8c3340e37c32d2e9d6152abe2c61f7/Markdown-3.7-py3-none-any.whl", hash = "sha256:7eb6df5690b81a1d7942992c97fad2938e956e79df20cbc6186e9c3a77b1c803", size = 106349 }, -] - -[[package]] -name = "markupsafe" -version = "3.0.2" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/b2/97/5d42485e71dfc078108a86d6de8fa46db44a1a9295e89c5d6d4a06e23a62/markupsafe-3.0.2.tar.gz", hash = "sha256:ee55d3edf80167e48ea11a923c7386f4669df67d7994554387f84e7d8b0a2bf0", size = 20537 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/6b/28/bbf83e3f76936960b850435576dd5e67034e200469571be53f69174a2dfd/MarkupSafe-3.0.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9025b4018f3a1314059769c7bf15441064b2207cb3f065e6ea1e7359cb46db9d", size = 14353 }, - { url = "https://files.pythonhosted.org/packages/6c/30/316d194b093cde57d448a4c3209f22e3046c5bb2fb0820b118292b334be7/MarkupSafe-3.0.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:93335ca3812df2f366e80509ae119189886b0f3c2b81325d39efdb84a1e2ae93", size = 12392 }, - { url = "https://files.pythonhosted.org/packages/f2/96/9cdafba8445d3a53cae530aaf83c38ec64c4d5427d975c974084af5bc5d2/MarkupSafe-3.0.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2cb8438c3cbb25e220c2ab33bb226559e7afb3baec11c4f218ffa7308603c832", size = 23984 }, - { url = "https://files.pythonhosted.org/packages/f1/a4/aefb044a2cd8d7334c8a47d3fb2c9f328ac48cb349468cc31c20b539305f/MarkupSafe-3.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a123e330ef0853c6e822384873bef7507557d8e4a082961e1defa947aa59ba84", size = 23120 }, - { url = "https://files.pythonhosted.org/packages/8d/21/5e4851379f88f3fad1de30361db501300d4f07bcad047d3cb0449fc51f8c/MarkupSafe-3.0.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e084f686b92e5b83186b07e8a17fc09e38fff551f3602b249881fec658d3eca", size = 23032 }, - { url = "https://files.pythonhosted.org/packages/00/7b/e92c64e079b2d0d7ddf69899c98842f3f9a60a1ae72657c89ce2655c999d/MarkupSafe-3.0.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d8213e09c917a951de9d09ecee036d5c7d36cb6cb7dbaece4c71a60d79fb9798", size = 24057 }, - { url = "https://files.pythonhosted.org/packages/f9/ac/46f960ca323037caa0a10662ef97d0a4728e890334fc156b9f9e52bcc4ca/MarkupSafe-3.0.2-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:5b02fb34468b6aaa40dfc198d813a641e3a63b98c2b05a16b9f80b7ec314185e", size = 23359 }, - { url = "https://files.pythonhosted.org/packages/69/84/83439e16197337b8b14b6a5b9c2105fff81d42c2a7c5b58ac7b62ee2c3b1/MarkupSafe-3.0.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:0bff5e0ae4ef2e1ae4fdf2dfd5b76c75e5c2fa4132d05fc1b0dabcd20c7e28c4", size = 23306 }, - { url = "https://files.pythonhosted.org/packages/9a/34/a15aa69f01e2181ed8d2b685c0d2f6655d5cca2c4db0ddea775e631918cd/MarkupSafe-3.0.2-cp311-cp311-win32.whl", hash = "sha256:6c89876f41da747c8d3677a2b540fb32ef5715f97b66eeb0c6b66f5e3ef6f59d", size = 15094 }, - { url = "https://files.pythonhosted.org/packages/da/b8/3a3bd761922d416f3dc5d00bfbed11f66b1ab89a0c2b6e887240a30b0f6b/MarkupSafe-3.0.2-cp311-cp311-win_amd64.whl", hash = "sha256:70a87b411535ccad5ef2f1df5136506a10775d267e197e4cf531ced10537bd6b", size = 15521 }, - { url = 
"https://files.pythonhosted.org/packages/22/09/d1f21434c97fc42f09d290cbb6350d44eb12f09cc62c9476effdb33a18aa/MarkupSafe-3.0.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:9778bd8ab0a994ebf6f84c2b949e65736d5575320a17ae8984a77fab08db94cf", size = 14274 }, - { url = "https://files.pythonhosted.org/packages/6b/b0/18f76bba336fa5aecf79d45dcd6c806c280ec44538b3c13671d49099fdd0/MarkupSafe-3.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:846ade7b71e3536c4e56b386c2a47adf5741d2d8b94ec9dc3e92e5e1ee1e2225", size = 12348 }, - { url = "https://files.pythonhosted.org/packages/e0/25/dd5c0f6ac1311e9b40f4af06c78efde0f3b5cbf02502f8ef9501294c425b/MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1c99d261bd2d5f6b59325c92c73df481e05e57f19837bdca8413b9eac4bd8028", size = 24149 }, - { url = "https://files.pythonhosted.org/packages/f3/f0/89e7aadfb3749d0f52234a0c8c7867877876e0a20b60e2188e9850794c17/MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e17c96c14e19278594aa4841ec148115f9c7615a47382ecb6b82bd8fea3ab0c8", size = 23118 }, - { url = "https://files.pythonhosted.org/packages/d5/da/f2eeb64c723f5e3777bc081da884b414671982008c47dcc1873d81f625b6/MarkupSafe-3.0.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:88416bd1e65dcea10bc7569faacb2c20ce071dd1f87539ca2ab364bf6231393c", size = 22993 }, - { url = "https://files.pythonhosted.org/packages/da/0e/1f32af846df486dce7c227fe0f2398dc7e2e51d4a370508281f3c1c5cddc/MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2181e67807fc2fa785d0592dc2d6206c019b9502410671cc905d132a92866557", size = 24178 }, - { url = "https://files.pythonhosted.org/packages/c4/f6/bb3ca0532de8086cbff5f06d137064c8410d10779c4c127e0e47d17c0b71/MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:52305740fe773d09cffb16f8ed0427942901f00adedac82ec8b67752f58a1b22", size = 23319 }, - { url = "https://files.pythonhosted.org/packages/a2/82/8be4c96ffee03c5b4a034e60a31294daf481e12c7c43ab8e34a1453ee48b/MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:ad10d3ded218f1039f11a75f8091880239651b52e9bb592ca27de44eed242a48", size = 23352 }, - { url = "https://files.pythonhosted.org/packages/51/ae/97827349d3fcffee7e184bdf7f41cd6b88d9919c80f0263ba7acd1bbcb18/MarkupSafe-3.0.2-cp312-cp312-win32.whl", hash = "sha256:0f4ca02bea9a23221c0182836703cbf8930c5e9454bacce27e767509fa286a30", size = 15097 }, - { url = "https://files.pythonhosted.org/packages/c1/80/a61f99dc3a936413c3ee4e1eecac96c0da5ed07ad56fd975f1a9da5bc630/MarkupSafe-3.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:8e06879fc22a25ca47312fbe7c8264eb0b662f6db27cb2d3bbbc74b1df4b9b87", size = 15601 }, - { url = "https://files.pythonhosted.org/packages/83/0e/67eb10a7ecc77a0c2bbe2b0235765b98d164d81600746914bebada795e97/MarkupSafe-3.0.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ba9527cdd4c926ed0760bc301f6728ef34d841f405abf9d4f959c478421e4efd", size = 14274 }, - { url = "https://files.pythonhosted.org/packages/2b/6d/9409f3684d3335375d04e5f05744dfe7e9f120062c9857df4ab490a1031a/MarkupSafe-3.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f8b3d067f2e40fe93e1ccdd6b2e1d16c43140e76f02fb1319a05cf2b79d99430", size = 12352 }, - { url = "https://files.pythonhosted.org/packages/d2/f5/6eadfcd3885ea85fe2a7c128315cc1bb7241e1987443d78c8fe712d03091/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:569511d3b58c8791ab4c2e1285575265991e6d8f8700c7be0e88f86cb0672094", size = 24122 }, - { url = "https://files.pythonhosted.org/packages/0c/91/96cf928db8236f1bfab6ce15ad070dfdd02ed88261c2afafd4b43575e9e9/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:15ab75ef81add55874e7ab7055e9c397312385bd9ced94920f2802310c930396", size = 23085 }, - { url = "https://files.pythonhosted.org/packages/c2/cf/c9d56af24d56ea04daae7ac0940232d31d5a8354f2b457c6d856b2057d69/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f3818cb119498c0678015754eba762e0d61e5b52d34c8b13d770f0719f7b1d79", size = 22978 }, - { url = "https://files.pythonhosted.org/packages/2a/9f/8619835cd6a711d6272d62abb78c033bda638fdc54c4e7f4272cf1c0962b/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:cdb82a876c47801bb54a690c5ae105a46b392ac6099881cdfb9f6e95e4014c6a", size = 24208 }, - { url = "https://files.pythonhosted.org/packages/f9/bf/176950a1792b2cd2102b8ffeb5133e1ed984547b75db47c25a67d3359f77/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:cabc348d87e913db6ab4aa100f01b08f481097838bdddf7c7a84b7575b7309ca", size = 23357 }, - { url = "https://files.pythonhosted.org/packages/ce/4f/9a02c1d335caabe5c4efb90e1b6e8ee944aa245c1aaaab8e8a618987d816/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:444dcda765c8a838eaae23112db52f1efaf750daddb2d9ca300bcae1039adc5c", size = 23344 }, - { url = "https://files.pythonhosted.org/packages/ee/55/c271b57db36f748f0e04a759ace9f8f759ccf22b4960c270c78a394f58be/MarkupSafe-3.0.2-cp313-cp313-win32.whl", hash = "sha256:bcf3e58998965654fdaff38e58584d8937aa3096ab5354d493c77d1fdd66d7a1", size = 15101 }, - { url = "https://files.pythonhosted.org/packages/29/88/07df22d2dd4df40aba9f3e402e6dc1b8ee86297dddbad4872bd5e7b0094f/MarkupSafe-3.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:e6a2a455bd412959b57a172ce6328d2dd1f01cb2135efda2e4576e8a23fa3b0f", size = 15603 }, - { url = "https://files.pythonhosted.org/packages/62/6a/8b89d24db2d32d433dffcd6a8779159da109842434f1dd2f6e71f32f738c/MarkupSafe-3.0.2-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:b5a6b3ada725cea8a5e634536b1b01c30bcdcd7f9c6fff4151548d5bf6b3a36c", size = 14510 }, - { url = "https://files.pythonhosted.org/packages/7a/06/a10f955f70a2e5a9bf78d11a161029d278eeacbd35ef806c3fd17b13060d/MarkupSafe-3.0.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:a904af0a6162c73e3edcb969eeeb53a63ceeb5d8cf642fade7d39e7963a22ddb", size = 12486 }, - { url = "https://files.pythonhosted.org/packages/34/cf/65d4a571869a1a9078198ca28f39fba5fbb910f952f9dbc5220afff9f5e6/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4aa4e5faecf353ed117801a068ebab7b7e09ffb6e1d5e412dc852e0da018126c", size = 25480 }, - { url = "https://files.pythonhosted.org/packages/0c/e3/90e9651924c430b885468b56b3d597cabf6d72be4b24a0acd1fa0e12af67/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0ef13eaeee5b615fb07c9a7dadb38eac06a0608b41570d8ade51c56539e509d", size = 23914 }, - { url = "https://files.pythonhosted.org/packages/66/8c/6c7cf61f95d63bb866db39085150df1f2a5bd3335298f14a66b48e92659c/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d16a81a06776313e817c951135cf7340a3e91e8c1ff2fac444cfd75fffa04afe", size = 23796 }, - { url = 
"https://files.pythonhosted.org/packages/bb/35/cbe9238ec3f47ac9a7c8b3df7a808e7cb50fe149dc7039f5f454b3fba218/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:6381026f158fdb7c72a168278597a5e3a5222e83ea18f543112b2662a9b699c5", size = 25473 }, - { url = "https://files.pythonhosted.org/packages/e6/32/7621a4382488aa283cc05e8984a9c219abad3bca087be9ec77e89939ded9/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:3d79d162e7be8f996986c064d1c7c817f6df3a77fe3d6859f6f9e7be4b8c213a", size = 24114 }, - { url = "https://files.pythonhosted.org/packages/0d/80/0985960e4b89922cb5a0bac0ed39c5b96cbc1a536a99f30e8c220a996ed9/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:131a3c7689c85f5ad20f9f6fb1b866f402c445b220c19fe4308c0b147ccd2ad9", size = 24098 }, - { url = "https://files.pythonhosted.org/packages/82/78/fedb03c7d5380df2427038ec8d973587e90561b2d90cd472ce9254cf348b/MarkupSafe-3.0.2-cp313-cp313t-win32.whl", hash = "sha256:ba8062ed2cf21c07a9e295d5b8a2a5ce678b913b45fdf68c32d95d6c1291e0b6", size = 15208 }, - { url = "https://files.pythonhosted.org/packages/4f/65/6079a46068dfceaeabb5dcad6d674f5f5c61a6fa5673746f42a9f4c233b3/MarkupSafe-3.0.2-cp313-cp313t-win_amd64.whl", hash = "sha256:e444a31f8db13eb18ada366ab3cf45fd4b31e4db1236a4448f68778c1d1a5a2f", size = 15739 }, -] - -[[package]] -name = "mergedeep" -version = "1.3.4" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/3a/41/580bb4006e3ed0361b8151a01d324fb03f420815446c7def45d02f74c270/mergedeep-1.3.4.tar.gz", hash = "sha256:0096d52e9dad9939c3d975a774666af186eda617e6ca84df4c94dec30004f2a8", size = 4661 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/2c/19/04f9b178c2d8a15b076c8b5140708fa6ffc5601fb6f1e975537072df5b2a/mergedeep-1.3.4-py3-none-any.whl", hash = "sha256:70775750742b25c0d8f36c55aed03d24c3384d17c951b3175d898bd778ef0307", size = 6354 }, -] - -[[package]] -name = "mkdocs" -version = "1.6.1" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "click" }, - { name = "colorama", marker = "platform_system == 'Windows'" }, - { name = "ghp-import" }, - { name = "jinja2" }, - { name = "markdown" }, - { name = "markupsafe" }, - { name = "mergedeep" }, - { name = "mkdocs-get-deps" }, - { name = "packaging" }, - { name = "pathspec" }, - { name = "pyyaml" }, - { name = "pyyaml-env-tag" }, - { name = "watchdog" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/bc/c6/bbd4f061bd16b378247f12953ffcb04786a618ce5e904b8c5a01a0309061/mkdocs-1.6.1.tar.gz", hash = "sha256:7b432f01d928c084353ab39c57282f29f92136665bdd6abf7c1ec8d822ef86f2", size = 3889159 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/22/5b/dbc6a8cddc9cfa9c4971d59fb12bb8d42e161b7e7f8cc89e49137c5b279c/mkdocs-1.6.1-py3-none-any.whl", hash = "sha256:db91759624d1647f3f34aa0c3f327dd2601beae39a366d6e064c03468d35c20e", size = 3864451 }, -] - -[[package]] -name = "mkdocs-get-deps" -version = "0.2.0" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "mergedeep" }, - { name = "platformdirs" }, - { name = "pyyaml" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/98/f5/ed29cd50067784976f25ed0ed6fcd3c2ce9eb90650aa3b2796ddf7b6870b/mkdocs_get_deps-0.2.0.tar.gz", hash = "sha256:162b3d129c7fad9b19abfdcb9c1458a651628e4b1dea628ac68790fb3061c60c", size = 10239 } -wheels = [ - { url = 
"https://files.pythonhosted.org/packages/9f/d4/029f984e8d3f3b6b726bd33cafc473b75e9e44c0f7e80a5b29abc466bdea/mkdocs_get_deps-0.2.0-py3-none-any.whl", hash = "sha256:2bf11d0b133e77a0dd036abeeb06dec8775e46efa526dc70667d8863eefc6134", size = 9521 }, -] - -[[package]] -name = "mkdocs-glightbox" -version = "0.4.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/86/5a/0bc456397ba0acc684b5b1daa4ca232ed717938fd37198251d8bcc4053bf/mkdocs-glightbox-0.4.0.tar.gz", hash = "sha256:392b34207bf95991071a16d5f8916d1d2f2cd5d5bb59ae2997485ccd778c70d9", size = 32010 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/c1/72/b0c2128bb569c732c11ae8e49a777089e77d83c05946062caa19b841e6fb/mkdocs_glightbox-0.4.0-py3-none-any.whl", hash = "sha256:e0107beee75d3eb7380ac06ea2d6eac94c999eaa49f8c3cbab0e7be2ac006ccf", size = 31154 }, -] - -[[package]] -name = "mkdocs-material" -version = "9.5.49" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "babel" }, - { name = "colorama" }, - { name = "jinja2" }, - { name = "markdown" }, - { name = "mkdocs" }, - { name = "mkdocs-material-extensions" }, - { name = "paginate" }, - { name = "pygments" }, - { name = "pymdown-extensions" }, - { name = "regex" }, - { name = "requests" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/e2/14/8daeeecee2e25bd84239a843fdcb92b20db88ebbcb26e0d32f414ca54a22/mkdocs_material-9.5.49.tar.gz", hash = "sha256:3671bb282b4f53a1c72e08adbe04d2481a98f85fed392530051f80ff94a9621d", size = 3949559 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/fc/2d/2dd23a36b48421db54f118bb6f6f733dbe2d5c78fe7867375e48649fd3df/mkdocs_material-9.5.49-py3-none-any.whl", hash = "sha256:c3c2d8176b18198435d3a3e119011922f3e11424074645c24019c2dcf08a360e", size = 8684098 }, -] - -[[package]] -name = "mkdocs-material-extensions" -version = "1.3.1" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/79/9b/9b4c96d6593b2a541e1cb8b34899a6d021d208bb357042823d4d2cabdbe7/mkdocs_material_extensions-1.3.1.tar.gz", hash = "sha256:10c9511cea88f568257f960358a467d12b970e1f7b2c0e5fb2bb48cab1928443", size = 11847 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/5b/54/662a4743aa81d9582ee9339d4ffa3c8fd40a4965e033d77b9da9774d3960/mkdocs_material_extensions-1.3.1-py3-none-any.whl", hash = "sha256:adff8b62700b25cb77b53358dad940f3ef973dd6db797907c49e3c2ef3ab4e31", size = 8728 }, -] - -[[package]] -name = "mkdocs-phospho" -version = "0.1.0" -source = { virtual = "." 
} -dependencies = [ - { name = "mkdocs" }, - { name = "mkdocs-glightbox" }, - { name = "mkdocs-material" }, -] - -[package.metadata] -requires-dist = [ - { name = "mkdocs", specifier = ">=1.6.1" }, - { name = "mkdocs-glightbox", specifier = ">=0.4.0" }, - { name = "mkdocs-material", specifier = ">=9.5.49" }, -] - -[[package]] -name = "packaging" -version = "24.2" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/d0/63/68dbb6eb2de9cb10ee4c9c14a0148804425e13c4fb20d61cce69f53106da/packaging-24.2.tar.gz", hash = "sha256:c228a6dc5e932d346bc5739379109d49e8853dd8223571c7c5b55260edc0b97f", size = 163950 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/88/ef/eb23f262cca3c0c4eb7ab1933c3b1f03d021f2c48f54763065b6f0e321be/packaging-24.2-py3-none-any.whl", hash = "sha256:09abb1bccd265c01f4a3aa3f7a7db064b36514d2cba19a2f694fe6150451a759", size = 65451 }, -] - -[[package]] -name = "paginate" -version = "0.5.7" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/ec/46/68dde5b6bc00c1296ec6466ab27dddede6aec9af1b99090e1107091b3b84/paginate-0.5.7.tar.gz", hash = "sha256:22bd083ab41e1a8b4f3690544afb2c60c25e5c9a63a30fa2f483f6c60c8e5945", size = 19252 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/90/96/04b8e52da071d28f5e21a805b19cb9390aa17a47462ac87f5e2696b9566d/paginate-0.5.7-py2.py3-none-any.whl", hash = "sha256:b885e2af73abcf01d9559fd5216b57ef722f8c42affbb63942377668e35c7591", size = 13746 }, -] - -[[package]] -name = "pathspec" -version = "0.12.1" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/ca/bc/f35b8446f4531a7cb215605d100cd88b7ac6f44ab3fc94870c120ab3adbf/pathspec-0.12.1.tar.gz", hash = "sha256:a482d51503a1ab33b1c67a6c3813a26953dbdc71c31dacaef9a838c4e29f5712", size = 51043 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/cc/20/ff623b09d963f88bfde16306a54e12ee5ea43e9b597108672ff3a408aad6/pathspec-0.12.1-py3-none-any.whl", hash = "sha256:a0d503e138a4c123b27490a4f7beda6a01c6f288df0e4a8b79c7eb0dc7b4cc08", size = 31191 }, -] - -[[package]] -name = "platformdirs" -version = "4.3.6" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/13/fc/128cc9cb8f03208bdbf93d3aa862e16d376844a14f9a0ce5cf4507372de4/platformdirs-4.3.6.tar.gz", hash = "sha256:357fb2acbc885b0419afd3ce3ed34564c13c9b95c89360cd9563f73aa5e2b907", size = 21302 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/3c/a6/bc1012356d8ece4d66dd75c4b9fc6c1f6650ddd5991e421177d9f8f671be/platformdirs-4.3.6-py3-none-any.whl", hash = "sha256:73e575e1408ab8103900836b97580d5307456908a03e92031bab39e4554cc3fb", size = 18439 }, -] - -[[package]] -name = "pygments" -version = "2.19.1" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/7c/2d/c3338d48ea6cc0feb8446d8e6937e1408088a72a39937982cc6111d17f84/pygments-2.19.1.tar.gz", hash = "sha256:61c16d2a8576dc0649d9f39e089b5f02bcd27fba10d8fb4dcc28173f7a45151f", size = 4968581 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/8a/0b/9fcc47d19c48b59121088dd6da2488a49d5f72dacf8262e2790a1d2c7d15/pygments-2.19.1-py3-none-any.whl", hash = "sha256:9ea1544ad55cecf4b8242fab6dd35a93bbce657034b0611ee383099054ab6d8c", size = 1225293 }, -] - -[[package]] -name = "pymdown-extensions" -version = "10.13" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { 
name = "markdown" }, - { name = "pyyaml" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/49/87/4998d1aac5afea5b081238a609d9814f4c33cd5c7123503276d1105fb6a9/pymdown_extensions-10.13.tar.gz", hash = "sha256:e0b351494dc0d8d14a1f52b39b1499a00ef1566b4ba23dc74f1eba75c736f5dd", size = 843302 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/86/7f/46c7122186759350cf523c71d29712be534f769f073a1d980ce8f095072c/pymdown_extensions-10.13-py3-none-any.whl", hash = "sha256:80bc33d715eec68e683e04298946d47d78c7739e79d808203df278ee8ef89428", size = 264108 }, -] - -[[package]] -name = "python-dateutil" -version = "2.9.0.post0" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "six" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/66/c0/0c8b6ad9f17a802ee498c46e004a0eb49bc148f2fd230864601a86dcf6db/python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3", size = 342432 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/ec/57/56b9bcc3c9c6a792fcbaf139543cee77261f3651ca9da0c93f5c1221264b/python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427", size = 229892 }, -] - -[[package]] -name = "pyyaml" -version = "6.0.2" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/54/ed/79a089b6be93607fa5cdaedf301d7dfb23af5f25c398d5ead2525b063e17/pyyaml-6.0.2.tar.gz", hash = "sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e", size = 130631 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/f8/aa/7af4e81f7acba21a4c6be026da38fd2b872ca46226673c89a758ebdc4fd2/PyYAML-6.0.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:cc1c1159b3d456576af7a3e4d1ba7e6924cb39de8f67111c735f6fc832082774", size = 184612 }, - { url = "https://files.pythonhosted.org/packages/8b/62/b9faa998fd185f65c1371643678e4d58254add437edb764a08c5a98fb986/PyYAML-6.0.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1e2120ef853f59c7419231f3bf4e7021f1b936f6ebd222406c3b60212205d2ee", size = 172040 }, - { url = "https://files.pythonhosted.org/packages/ad/0c/c804f5f922a9a6563bab712d8dcc70251e8af811fce4524d57c2c0fd49a4/PyYAML-6.0.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5d225db5a45f21e78dd9358e58a98702a0302f2659a3c6cd320564b75b86f47c", size = 736829 }, - { url = "https://files.pythonhosted.org/packages/51/16/6af8d6a6b210c8e54f1406a6b9481febf9c64a3109c541567e35a49aa2e7/PyYAML-6.0.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5ac9328ec4831237bec75defaf839f7d4564be1e6b25ac710bd1a96321cc8317", size = 764167 }, - { url = "https://files.pythonhosted.org/packages/75/e4/2c27590dfc9992f73aabbeb9241ae20220bd9452df27483b6e56d3975cc5/PyYAML-6.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3ad2a3decf9aaba3d29c8f537ac4b243e36bef957511b4766cb0057d32b0be85", size = 762952 }, - { url = "https://files.pythonhosted.org/packages/9b/97/ecc1abf4a823f5ac61941a9c00fe501b02ac3ab0e373c3857f7d4b83e2b6/PyYAML-6.0.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ff3824dc5261f50c9b0dfb3be22b4567a6f938ccce4587b38952d85fd9e9afe4", size = 735301 }, - { url = "https://files.pythonhosted.org/packages/45/73/0f49dacd6e82c9430e46f4a027baa4ca205e8b0a9dce1397f44edc23559d/PyYAML-6.0.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = 
"sha256:797b4f722ffa07cc8d62053e4cff1486fa6dc094105d13fea7b1de7d8bf71c9e", size = 756638 }, - { url = "https://files.pythonhosted.org/packages/22/5f/956f0f9fc65223a58fbc14459bf34b4cc48dec52e00535c79b8db361aabd/PyYAML-6.0.2-cp311-cp311-win32.whl", hash = "sha256:11d8f3dd2b9c1207dcaf2ee0bbbfd5991f571186ec9cc78427ba5bd32afae4b5", size = 143850 }, - { url = "https://files.pythonhosted.org/packages/ed/23/8da0bbe2ab9dcdd11f4f4557ccaf95c10b9811b13ecced089d43ce59c3c8/PyYAML-6.0.2-cp311-cp311-win_amd64.whl", hash = "sha256:e10ce637b18caea04431ce14fabcf5c64a1c61ec9c56b071a4b7ca131ca52d44", size = 161980 }, - { url = "https://files.pythonhosted.org/packages/86/0c/c581167fc46d6d6d7ddcfb8c843a4de25bdd27e4466938109ca68492292c/PyYAML-6.0.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:c70c95198c015b85feafc136515252a261a84561b7b1d51e3384e0655ddf25ab", size = 183873 }, - { url = "https://files.pythonhosted.org/packages/a8/0c/38374f5bb272c051e2a69281d71cba6fdb983413e6758b84482905e29a5d/PyYAML-6.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ce826d6ef20b1bc864f0a68340c8b3287705cae2f8b4b1d932177dcc76721725", size = 173302 }, - { url = "https://files.pythonhosted.org/packages/c3/93/9916574aa8c00aa06bbac729972eb1071d002b8e158bd0e83a3b9a20a1f7/PyYAML-6.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1f71ea527786de97d1a0cc0eacd1defc0985dcf6b3f17bb77dcfc8c34bec4dc5", size = 739154 }, - { url = "https://files.pythonhosted.org/packages/95/0f/b8938f1cbd09739c6da569d172531567dbcc9789e0029aa070856f123984/PyYAML-6.0.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9b22676e8097e9e22e36d6b7bda33190d0d400f345f23d4065d48f4ca7ae0425", size = 766223 }, - { url = "https://files.pythonhosted.org/packages/b9/2b/614b4752f2e127db5cc206abc23a8c19678e92b23c3db30fc86ab731d3bd/PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:80bab7bfc629882493af4aa31a4cfa43a4c57c83813253626916b8c7ada83476", size = 767542 }, - { url = "https://files.pythonhosted.org/packages/d4/00/dd137d5bcc7efea1836d6264f049359861cf548469d18da90cd8216cf05f/PyYAML-6.0.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:0833f8694549e586547b576dcfaba4a6b55b9e96098b36cdc7ebefe667dfed48", size = 731164 }, - { url = "https://files.pythonhosted.org/packages/c9/1f/4f998c900485e5c0ef43838363ba4a9723ac0ad73a9dc42068b12aaba4e4/PyYAML-6.0.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8b9c7197f7cb2738065c481a0461e50ad02f18c78cd75775628afb4d7137fb3b", size = 756611 }, - { url = "https://files.pythonhosted.org/packages/df/d1/f5a275fdb252768b7a11ec63585bc38d0e87c9e05668a139fea92b80634c/PyYAML-6.0.2-cp312-cp312-win32.whl", hash = "sha256:ef6107725bd54b262d6dedcc2af448a266975032bc85ef0172c5f059da6325b4", size = 140591 }, - { url = "https://files.pythonhosted.org/packages/0c/e8/4f648c598b17c3d06e8753d7d13d57542b30d56e6c2dedf9c331ae56312e/PyYAML-6.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:7e7401d0de89a9a855c839bc697c079a4af81cf878373abd7dc625847d25cbd8", size = 156338 }, - { url = "https://files.pythonhosted.org/packages/ef/e3/3af305b830494fa85d95f6d95ef7fa73f2ee1cc8ef5b495c7c3269fb835f/PyYAML-6.0.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:efdca5630322a10774e8e98e1af481aad470dd62c3170801852d752aa7a783ba", size = 181309 }, - { url = "https://files.pythonhosted.org/packages/45/9f/3b1c20a0b7a3200524eb0076cc027a970d320bd3a6592873c85c92a08731/PyYAML-6.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = 
"sha256:50187695423ffe49e2deacb8cd10510bc361faac997de9efef88badc3bb9e2d1", size = 171679 }, - { url = "https://files.pythonhosted.org/packages/7c/9a/337322f27005c33bcb656c655fa78325b730324c78620e8328ae28b64d0c/PyYAML-6.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0ffe8360bab4910ef1b9e87fb812d8bc0a308b0d0eef8c8f44e0254ab3b07133", size = 733428 }, - { url = "https://files.pythonhosted.org/packages/a3/69/864fbe19e6c18ea3cc196cbe5d392175b4cf3d5d0ac1403ec3f2d237ebb5/PyYAML-6.0.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:17e311b6c678207928d649faa7cb0d7b4c26a0ba73d41e99c4fff6b6c3276484", size = 763361 }, - { url = "https://files.pythonhosted.org/packages/04/24/b7721e4845c2f162d26f50521b825fb061bc0a5afcf9a386840f23ea19fa/PyYAML-6.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70b189594dbe54f75ab3a1acec5f1e3faa7e8cf2f1e08d9b561cb41b845f69d5", size = 759523 }, - { url = "https://files.pythonhosted.org/packages/2b/b2/e3234f59ba06559c6ff63c4e10baea10e5e7df868092bf9ab40e5b9c56b6/PyYAML-6.0.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:41e4e3953a79407c794916fa277a82531dd93aad34e29c2a514c2c0c5fe971cc", size = 726660 }, - { url = "https://files.pythonhosted.org/packages/fe/0f/25911a9f080464c59fab9027482f822b86bf0608957a5fcc6eaac85aa515/PyYAML-6.0.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:68ccc6023a3400877818152ad9a1033e3db8625d899c72eacb5a668902e4d652", size = 751597 }, - { url = "https://files.pythonhosted.org/packages/14/0d/e2c3b43bbce3cf6bd97c840b46088a3031085179e596d4929729d8d68270/PyYAML-6.0.2-cp313-cp313-win32.whl", hash = "sha256:bc2fa7c6b47d6bc618dd7fb02ef6fdedb1090ec036abab80d4681424b84c1183", size = 140527 }, - { url = "https://files.pythonhosted.org/packages/fa/de/02b54f42487e3d3c6efb3f89428677074ca7bf43aae402517bc7cca949f3/PyYAML-6.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563", size = 156446 }, -] - -[[package]] -name = "pyyaml-env-tag" -version = "0.1" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "pyyaml" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/fb/8e/da1c6c58f751b70f8ceb1eb25bc25d524e8f14fe16edcce3f4e3ba08629c/pyyaml_env_tag-0.1.tar.gz", hash = "sha256:70092675bda14fdec33b31ba77e7543de9ddc88f2e5b99160396572d11525bdb", size = 5631 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/5a/66/bbb1dd374f5c870f59c5bb1db0e18cbe7fa739415a24cbd95b2d1f5ae0c4/pyyaml_env_tag-0.1-py3-none-any.whl", hash = "sha256:af31106dec8a4d68c60207c1886031cbf839b68aa7abccdb19868200532c2069", size = 3911 }, -] - -[[package]] -name = "regex" -version = "2024.11.6" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/8e/5f/bd69653fbfb76cf8604468d3b4ec4c403197144c7bfe0e6a5fc9e02a07cb/regex-2024.11.6.tar.gz", hash = "sha256:7ab159b063c52a0333c884e4679f8d7a85112ee3078fe3d9004b2dd875585519", size = 399494 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/58/58/7e4d9493a66c88a7da6d205768119f51af0f684fe7be7bac8328e217a52c/regex-2024.11.6-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:5478c6962ad548b54a591778e93cd7c456a7a29f8eca9c49e4f9a806dcc5d638", size = 482669 }, - { url = "https://files.pythonhosted.org/packages/34/4c/8f8e631fcdc2ff978609eaeef1d6994bf2f028b59d9ac67640ed051f1218/regex-2024.11.6-cp311-cp311-macosx_10_9_x86_64.whl", hash = 
"sha256:2c89a8cc122b25ce6945f0423dc1352cb9593c68abd19223eebbd4e56612c5b7", size = 287684 }, - { url = "https://files.pythonhosted.org/packages/c5/1b/f0e4d13e6adf866ce9b069e191f303a30ab1277e037037a365c3aad5cc9c/regex-2024.11.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:94d87b689cdd831934fa3ce16cc15cd65748e6d689f5d2b8f4f4df2065c9fa20", size = 284589 }, - { url = "https://files.pythonhosted.org/packages/25/4d/ab21047f446693887f25510887e6820b93f791992994f6498b0318904d4a/regex-2024.11.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1062b39a0a2b75a9c694f7a08e7183a80c63c0d62b301418ffd9c35f55aaa114", size = 792121 }, - { url = "https://files.pythonhosted.org/packages/45/ee/c867e15cd894985cb32b731d89576c41a4642a57850c162490ea34b78c3b/regex-2024.11.6-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:167ed4852351d8a750da48712c3930b031f6efdaa0f22fa1933716bfcd6bf4a3", size = 831275 }, - { url = "https://files.pythonhosted.org/packages/b3/12/b0f480726cf1c60f6536fa5e1c95275a77624f3ac8fdccf79e6727499e28/regex-2024.11.6-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2d548dafee61f06ebdb584080621f3e0c23fff312f0de1afc776e2a2ba99a74f", size = 818257 }, - { url = "https://files.pythonhosted.org/packages/bf/ce/0d0e61429f603bac433910d99ef1a02ce45a8967ffbe3cbee48599e62d88/regex-2024.11.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2a19f302cd1ce5dd01a9099aaa19cae6173306d1302a43b627f62e21cf18ac0", size = 792727 }, - { url = "https://files.pythonhosted.org/packages/e4/c1/243c83c53d4a419c1556f43777ccb552bccdf79d08fda3980e4e77dd9137/regex-2024.11.6-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bec9931dfb61ddd8ef2ebc05646293812cb6b16b60cf7c9511a832b6f1854b55", size = 780667 }, - { url = "https://files.pythonhosted.org/packages/c5/f4/75eb0dd4ce4b37f04928987f1d22547ddaf6c4bae697623c1b05da67a8aa/regex-2024.11.6-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:9714398225f299aa85267fd222f7142fcb5c769e73d7733344efc46f2ef5cf89", size = 776963 }, - { url = "https://files.pythonhosted.org/packages/16/5d/95c568574e630e141a69ff8a254c2f188b4398e813c40d49228c9bbd9875/regex-2024.11.6-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:202eb32e89f60fc147a41e55cb086db2a3f8cb82f9a9a88440dcfc5d37faae8d", size = 784700 }, - { url = "https://files.pythonhosted.org/packages/8e/b5/f8495c7917f15cc6fee1e7f395e324ec3e00ab3c665a7dc9d27562fd5290/regex-2024.11.6-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:4181b814e56078e9b00427ca358ec44333765f5ca1b45597ec7446d3a1ef6e34", size = 848592 }, - { url = "https://files.pythonhosted.org/packages/1c/80/6dd7118e8cb212c3c60b191b932dc57db93fb2e36fb9e0e92f72a5909af9/regex-2024.11.6-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:068376da5a7e4da51968ce4c122a7cd31afaaec4fccc7856c92f63876e57b51d", size = 852929 }, - { url = "https://files.pythonhosted.org/packages/11/9b/5a05d2040297d2d254baf95eeeb6df83554e5e1df03bc1a6687fc4ba1f66/regex-2024.11.6-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ac10f2c4184420d881a3475fb2c6f4d95d53a8d50209a2500723d831036f7c45", size = 781213 }, - { url = "https://files.pythonhosted.org/packages/26/b7/b14e2440156ab39e0177506c08c18accaf2b8932e39fb092074de733d868/regex-2024.11.6-cp311-cp311-win32.whl", hash = "sha256:c36f9b6f5f8649bb251a5f3f66564438977b7ef8386a52460ae77e6070d309d9", size = 261734 }, - { url = 
"https://files.pythonhosted.org/packages/80/32/763a6cc01d21fb3819227a1cc3f60fd251c13c37c27a73b8ff4315433a8e/regex-2024.11.6-cp311-cp311-win_amd64.whl", hash = "sha256:02e28184be537f0e75c1f9b2f8847dc51e08e6e171c6bde130b2687e0c33cf60", size = 274052 }, - { url = "https://files.pythonhosted.org/packages/ba/30/9a87ce8336b172cc232a0db89a3af97929d06c11ceaa19d97d84fa90a8f8/regex-2024.11.6-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:52fb28f528778f184f870b7cf8f225f5eef0a8f6e3778529bdd40c7b3920796a", size = 483781 }, - { url = "https://files.pythonhosted.org/packages/01/e8/00008ad4ff4be8b1844786ba6636035f7ef926db5686e4c0f98093612add/regex-2024.11.6-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:fdd6028445d2460f33136c55eeb1f601ab06d74cb3347132e1c24250187500d9", size = 288455 }, - { url = "https://files.pythonhosted.org/packages/60/85/cebcc0aff603ea0a201667b203f13ba75d9fc8668fab917ac5b2de3967bc/regex-2024.11.6-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:805e6b60c54bf766b251e94526ebad60b7de0c70f70a4e6210ee2891acb70bf2", size = 284759 }, - { url = "https://files.pythonhosted.org/packages/94/2b/701a4b0585cb05472a4da28ee28fdfe155f3638f5e1ec92306d924e5faf0/regex-2024.11.6-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b85c2530be953a890eaffde05485238f07029600e8f098cdf1848d414a8b45e4", size = 794976 }, - { url = "https://files.pythonhosted.org/packages/4b/bf/fa87e563bf5fee75db8915f7352e1887b1249126a1be4813837f5dbec965/regex-2024.11.6-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:bb26437975da7dc36b7efad18aa9dd4ea569d2357ae6b783bf1118dabd9ea577", size = 833077 }, - { url = "https://files.pythonhosted.org/packages/a1/56/7295e6bad94b047f4d0834e4779491b81216583c00c288252ef625c01d23/regex-2024.11.6-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:abfa5080c374a76a251ba60683242bc17eeb2c9818d0d30117b4486be10c59d3", size = 823160 }, - { url = "https://files.pythonhosted.org/packages/fb/13/e3b075031a738c9598c51cfbc4c7879e26729c53aa9cca59211c44235314/regex-2024.11.6-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70b7fa6606c2881c1db9479b0eaa11ed5dfa11c8d60a474ff0e095099f39d98e", size = 796896 }, - { url = "https://files.pythonhosted.org/packages/24/56/0b3f1b66d592be6efec23a795b37732682520b47c53da5a32c33ed7d84e3/regex-2024.11.6-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0c32f75920cf99fe6b6c539c399a4a128452eaf1af27f39bce8909c9a3fd8cbe", size = 783997 }, - { url = "https://files.pythonhosted.org/packages/f9/a1/eb378dada8b91c0e4c5f08ffb56f25fcae47bf52ad18f9b2f33b83e6d498/regex-2024.11.6-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:982e6d21414e78e1f51cf595d7f321dcd14de1f2881c5dc6a6e23bbbbd68435e", size = 781725 }, - { url = "https://files.pythonhosted.org/packages/83/f2/033e7dec0cfd6dda93390089864732a3409246ffe8b042e9554afa9bff4e/regex-2024.11.6-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:a7c2155f790e2fb448faed6dd241386719802296ec588a8b9051c1f5c481bc29", size = 789481 }, - { url = "https://files.pythonhosted.org/packages/83/23/15d4552ea28990a74e7696780c438aadd73a20318c47e527b47a4a5a596d/regex-2024.11.6-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:149f5008d286636e48cd0b1dd65018548944e495b0265b45e1bffecce1ef7f39", size = 852896 }, - { url = 
"https://files.pythonhosted.org/packages/e3/39/ed4416bc90deedbfdada2568b2cb0bc1fdb98efe11f5378d9892b2a88f8f/regex-2024.11.6-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:e5364a4502efca094731680e80009632ad6624084aff9a23ce8c8c6820de3e51", size = 860138 }, - { url = "https://files.pythonhosted.org/packages/93/2d/dd56bb76bd8e95bbce684326302f287455b56242a4f9c61f1bc76e28360e/regex-2024.11.6-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:0a86e7eeca091c09e021db8eb72d54751e527fa47b8d5787caf96d9831bd02ad", size = 787692 }, - { url = "https://files.pythonhosted.org/packages/0b/55/31877a249ab7a5156758246b9c59539abbeba22461b7d8adc9e8475ff73e/regex-2024.11.6-cp312-cp312-win32.whl", hash = "sha256:32f9a4c643baad4efa81d549c2aadefaeba12249b2adc5af541759237eee1c54", size = 262135 }, - { url = "https://files.pythonhosted.org/packages/38/ec/ad2d7de49a600cdb8dd78434a1aeffe28b9d6fc42eb36afab4a27ad23384/regex-2024.11.6-cp312-cp312-win_amd64.whl", hash = "sha256:a93c194e2df18f7d264092dc8539b8ffb86b45b899ab976aa15d48214138e81b", size = 273567 }, - { url = "https://files.pythonhosted.org/packages/90/73/bcb0e36614601016552fa9344544a3a2ae1809dc1401b100eab02e772e1f/regex-2024.11.6-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:a6ba92c0bcdf96cbf43a12c717eae4bc98325ca3730f6b130ffa2e3c3c723d84", size = 483525 }, - { url = "https://files.pythonhosted.org/packages/0f/3f/f1a082a46b31e25291d830b369b6b0c5576a6f7fb89d3053a354c24b8a83/regex-2024.11.6-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:525eab0b789891ac3be914d36893bdf972d483fe66551f79d3e27146191a37d4", size = 288324 }, - { url = "https://files.pythonhosted.org/packages/09/c9/4e68181a4a652fb3ef5099e077faf4fd2a694ea6e0f806a7737aff9e758a/regex-2024.11.6-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:086a27a0b4ca227941700e0b31425e7a28ef1ae8e5e05a33826e17e47fbfdba0", size = 284617 }, - { url = "https://files.pythonhosted.org/packages/fc/fd/37868b75eaf63843165f1d2122ca6cb94bfc0271e4428cf58c0616786dce/regex-2024.11.6-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bde01f35767c4a7899b7eb6e823b125a64de314a8ee9791367c9a34d56af18d0", size = 795023 }, - { url = "https://files.pythonhosted.org/packages/c4/7c/d4cd9c528502a3dedb5c13c146e7a7a539a3853dc20209c8e75d9ba9d1b2/regex-2024.11.6-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b583904576650166b3d920d2bcce13971f6f9e9a396c673187f49811b2769dc7", size = 833072 }, - { url = "https://files.pythonhosted.org/packages/4f/db/46f563a08f969159c5a0f0e722260568425363bea43bb7ae370becb66a67/regex-2024.11.6-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1c4de13f06a0d54fa0d5ab1b7138bfa0d883220965a29616e3ea61b35d5f5fc7", size = 823130 }, - { url = "https://files.pythonhosted.org/packages/db/60/1eeca2074f5b87df394fccaa432ae3fc06c9c9bfa97c5051aed70e6e00c2/regex-2024.11.6-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3cde6e9f2580eb1665965ce9bf17ff4952f34f5b126beb509fee8f4e994f143c", size = 796857 }, - { url = "https://files.pythonhosted.org/packages/10/db/ac718a08fcee981554d2f7bb8402f1faa7e868c1345c16ab1ebec54b0d7b/regex-2024.11.6-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0d7f453dca13f40a02b79636a339c5b62b670141e63efd511d3f8f73fba162b3", size = 784006 }, - { url = "https://files.pythonhosted.org/packages/c2/41/7da3fe70216cea93144bf12da2b87367590bcf07db97604edeea55dac9ad/regex-2024.11.6-cp313-cp313-musllinux_1_2_aarch64.whl", hash = 
"sha256:59dfe1ed21aea057a65c6b586afd2a945de04fc7db3de0a6e3ed5397ad491b07", size = 781650 }, - { url = "https://files.pythonhosted.org/packages/a7/d5/880921ee4eec393a4752e6ab9f0fe28009435417c3102fc413f3fe81c4e5/regex-2024.11.6-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:b97c1e0bd37c5cd7902e65f410779d39eeda155800b65fc4d04cc432efa9bc6e", size = 789545 }, - { url = "https://files.pythonhosted.org/packages/dc/96/53770115e507081122beca8899ab7f5ae28ae790bfcc82b5e38976df6a77/regex-2024.11.6-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:f9d1e379028e0fc2ae3654bac3cbbef81bf3fd571272a42d56c24007979bafb6", size = 853045 }, - { url = "https://files.pythonhosted.org/packages/31/d3/1372add5251cc2d44b451bd94f43b2ec78e15a6e82bff6a290ef9fd8f00a/regex-2024.11.6-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:13291b39131e2d002a7940fb176e120bec5145f3aeb7621be6534e46251912c4", size = 860182 }, - { url = "https://files.pythonhosted.org/packages/ed/e3/c446a64984ea9f69982ba1a69d4658d5014bc7a0ea468a07e1a1265db6e2/regex-2024.11.6-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:4f51f88c126370dcec4908576c5a627220da6c09d0bff31cfa89f2523843316d", size = 787733 }, - { url = "https://files.pythonhosted.org/packages/2b/f1/e40c8373e3480e4f29f2692bd21b3e05f296d3afebc7e5dcf21b9756ca1c/regex-2024.11.6-cp313-cp313-win32.whl", hash = "sha256:63b13cfd72e9601125027202cad74995ab26921d8cd935c25f09c630436348ff", size = 262122 }, - { url = "https://files.pythonhosted.org/packages/45/94/bc295babb3062a731f52621cdc992d123111282e291abaf23faa413443ea/regex-2024.11.6-cp313-cp313-win_amd64.whl", hash = "sha256:2b3361af3198667e99927da8b84c1b010752fa4b1115ee30beaa332cabc3ef1a", size = 273545 }, -] - -[[package]] -name = "requests" -version = "2.32.3" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "certifi" }, - { name = "charset-normalizer" }, - { name = "idna" }, - { name = "urllib3" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/63/70/2bf7780ad2d390a8d301ad0b550f1581eadbd9a20f896afe06353c2a2913/requests-2.32.3.tar.gz", hash = "sha256:55365417734eb18255590a9ff9eb97e9e1da868d4ccd6402399eaf68af20a760", size = 131218 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/f9/9b/335f9764261e915ed497fcdeb11df5dfd6f7bf257d4a6a2a686d80da4d54/requests-2.32.3-py3-none-any.whl", hash = "sha256:70761cfe03c773ceb22aa2f671b4757976145175cdfca038c02654d061d6dcc6", size = 64928 }, -] - -[[package]] -name = "six" -version = "1.17.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/94/e7/b2c673351809dca68a0e064b6af791aa332cf192da575fd474ed7d6f16a2/six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81", size = 34031 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274", size = 11050 }, -] - -[[package]] -name = "urllib3" -version = "2.3.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/aa/63/e53da845320b757bf29ef6a9062f5c669fe997973f966045cb019c3f4b66/urllib3-2.3.0.tar.gz", hash = "sha256:f8c5449b3cf0861679ce7e0503c7b44b5ec981bec0d1d3795a07f1ba96f0204d", size = 307268 } -wheels = [ - { url = 
"https://files.pythonhosted.org/packages/c8/19/4ec628951a74043532ca2cf5d97b7b14863931476d117c471e8e2b1eb39f/urllib3-2.3.0-py3-none-any.whl", hash = "sha256:1cee9ad369867bfdbbb48b7dd50374c0967a0bb7710050facf0dd6911440e3df", size = 128369 }, -] - -[[package]] -name = "watchdog" -version = "6.0.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/db/7d/7f3d619e951c88ed75c6037b246ddcf2d322812ee8ea189be89511721d54/watchdog-6.0.0.tar.gz", hash = "sha256:9ddf7c82fda3ae8e24decda1338ede66e1c99883db93711d8fb941eaa2d8c282", size = 131220 } -wheels = [ - { url = "https://files.pythonhosted.org/packages/e0/24/d9be5cd6642a6aa68352ded4b4b10fb0d7889cb7f45814fb92cecd35f101/watchdog-6.0.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:6eb11feb5a0d452ee41f824e271ca311a09e250441c262ca2fd7ebcf2461a06c", size = 96393 }, - { url = "https://files.pythonhosted.org/packages/63/7a/6013b0d8dbc56adca7fdd4f0beed381c59f6752341b12fa0886fa7afc78b/watchdog-6.0.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ef810fbf7b781a5a593894e4f439773830bdecb885e6880d957d5b9382a960d2", size = 88392 }, - { url = "https://files.pythonhosted.org/packages/d1/40/b75381494851556de56281e053700e46bff5b37bf4c7267e858640af5a7f/watchdog-6.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:afd0fe1b2270917c5e23c2a65ce50c2a4abb63daafb0d419fde368e272a76b7c", size = 89019 }, - { url = "https://files.pythonhosted.org/packages/39/ea/3930d07dafc9e286ed356a679aa02d777c06e9bfd1164fa7c19c288a5483/watchdog-6.0.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:bdd4e6f14b8b18c334febb9c4425a878a2ac20efd1e0b231978e7b150f92a948", size = 96471 }, - { url = "https://files.pythonhosted.org/packages/12/87/48361531f70b1f87928b045df868a9fd4e253d9ae087fa4cf3f7113be363/watchdog-6.0.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c7c15dda13c4eb00d6fb6fc508b3c0ed88b9d5d374056b239c4ad1611125c860", size = 88449 }, - { url = "https://files.pythonhosted.org/packages/5b/7e/8f322f5e600812e6f9a31b75d242631068ca8f4ef0582dd3ae6e72daecc8/watchdog-6.0.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6f10cb2d5902447c7d0da897e2c6768bca89174d0c6e1e30abec5421af97a5b0", size = 89054 }, - { url = "https://files.pythonhosted.org/packages/68/98/b0345cabdce2041a01293ba483333582891a3bd5769b08eceb0d406056ef/watchdog-6.0.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:490ab2ef84f11129844c23fb14ecf30ef3d8a6abafd3754a6f75ca1e6654136c", size = 96480 }, - { url = "https://files.pythonhosted.org/packages/85/83/cdf13902c626b28eedef7ec4f10745c52aad8a8fe7eb04ed7b1f111ca20e/watchdog-6.0.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:76aae96b00ae814b181bb25b1b98076d5fc84e8a53cd8885a318b42b6d3a5134", size = 88451 }, - { url = "https://files.pythonhosted.org/packages/fe/c4/225c87bae08c8b9ec99030cd48ae9c4eca050a59bf5c2255853e18c87b50/watchdog-6.0.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a175f755fc2279e0b7312c0035d52e27211a5bc39719dd529625b1930917345b", size = 89057 }, - { url = "https://files.pythonhosted.org/packages/a9/c7/ca4bf3e518cb57a686b2feb4f55a1892fd9a3dd13f470fca14e00f80ea36/watchdog-6.0.0-py3-none-manylinux2014_aarch64.whl", hash = "sha256:7607498efa04a3542ae3e05e64da8202e58159aa1fa4acddf7678d34a35d4f13", size = 79079 }, - { url = "https://files.pythonhosted.org/packages/5c/51/d46dc9332f9a647593c947b4b88e2381c8dfc0942d15b8edc0310fa4abb1/watchdog-6.0.0-py3-none-manylinux2014_armv7l.whl", hash = "sha256:9041567ee8953024c83343288ccc458fd0a2d811d6a0fd68c4c22609e3490379", size 
= 79078 }, - { url = "https://files.pythonhosted.org/packages/d4/57/04edbf5e169cd318d5f07b4766fee38e825d64b6913ca157ca32d1a42267/watchdog-6.0.0-py3-none-manylinux2014_i686.whl", hash = "sha256:82dc3e3143c7e38ec49d61af98d6558288c415eac98486a5c581726e0737c00e", size = 79076 }, - { url = "https://files.pythonhosted.org/packages/ab/cc/da8422b300e13cb187d2203f20b9253e91058aaf7db65b74142013478e66/watchdog-6.0.0-py3-none-manylinux2014_ppc64.whl", hash = "sha256:212ac9b8bf1161dc91bd09c048048a95ca3a4c4f5e5d4a7d1b1a7d5752a7f96f", size = 79077 }, - { url = "https://files.pythonhosted.org/packages/2c/3b/b8964e04ae1a025c44ba8e4291f86e97fac443bca31de8bd98d3263d2fcf/watchdog-6.0.0-py3-none-manylinux2014_ppc64le.whl", hash = "sha256:e3df4cbb9a450c6d49318f6d14f4bbc80d763fa587ba46ec86f99f9e6876bb26", size = 79078 }, - { url = "https://files.pythonhosted.org/packages/62/ae/a696eb424bedff7407801c257d4b1afda455fe40821a2be430e173660e81/watchdog-6.0.0-py3-none-manylinux2014_s390x.whl", hash = "sha256:2cce7cfc2008eb51feb6aab51251fd79b85d9894e98ba847408f662b3395ca3c", size = 79077 }, - { url = "https://files.pythonhosted.org/packages/b5/e8/dbf020b4d98251a9860752a094d09a65e1b436ad181faf929983f697048f/watchdog-6.0.0-py3-none-manylinux2014_x86_64.whl", hash = "sha256:20ffe5b202af80ab4266dcd3e91aae72bf2da48c0d33bdb15c66658e685e94e2", size = 79078 }, - { url = "https://files.pythonhosted.org/packages/07/f6/d0e5b343768e8bcb4cda79f0f2f55051bf26177ecd5651f84c07567461cf/watchdog-6.0.0-py3-none-win32.whl", hash = "sha256:07df1fdd701c5d4c8e55ef6cf55b8f0120fe1aef7ef39a1c6fc6bc2e606d517a", size = 79065 }, - { url = "https://files.pythonhosted.org/packages/db/d9/c495884c6e548fce18a8f40568ff120bc3a4b7b99813081c8ac0c936fa64/watchdog-6.0.0-py3-none-win_amd64.whl", hash = "sha256:cbafb470cf848d93b5d013e2ecb245d4aa1c8fd0504e863ccefa32445359d680", size = 79070 }, - { url = "https://files.pythonhosted.org/packages/33/e8/e40370e6d74ddba47f002a32919d91310d6074130fe4e17dabcafc15cbf1/watchdog-6.0.0-py3-none-win_ia64.whl", hash = "sha256:a1914259fa9e1454315171103c6a30961236f508b9b623eae470268bbcc6a22f", size = 79067 }, -] diff --git a/self-hosting/index.html b/self-hosting/index.html new file mode 100644 index 0000000..6fd345f --- /dev/null +++ b/self-hosting/index.html @@ -0,0 +1,2451 @@ + + + + + + + + + + + + + + + + + + + + + + + + Self-hosting - phospho platform docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Self-hosting

The phospho platform can be hosted on your own infrastructure. The code is open source and available at https://github.com/phospho-app/phospho.

This is useful if you want to keep your data private or if you have specific data compliance requirements.

How to deploy phospho with Docker?

The platform can be deployed using Docker. Start by cloning the phospho repository.

git clone https://github.com/phospho-app/phospho.git
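
After cloning, move into the repository so that the commands below run from the project root (by default, git clones into a directory named after the repository):

cd phospho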

Once the environment variables are set up, you can use Docker Compose to quickly build and deploy the platform.

docker compose up
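
In practice, you will want to configure the environment variables before running the command above. The lines below are only an illustrative sketch: the template file name .env.example is an assumption, and the authoritative list of variables is in the setup guide referenced next.

# assumption: copy a template env file and fill in your own values
cp .env.example .env
# Docker Compose reads the .env file in the project directory when interpolating the compose file
docker compose up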

Please follow this guide for the complete instructions on how to set up the environment variables.

How to deploy phospho on the Cloud?

phospho is compatible with any cloud provider thanks to its container-based architecture.

  • Google Cloud Platform (feel free to refer to the deployment scripts here)
  • Microsoft Azure
  • Amazon Web Services

To get started easily, we recommend you use Porter.run.
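
If you manage the deployment yourself instead, a common pattern for a Compose-based stack is to build the service images, push them to a registry your cloud provider can pull from, and run them on a managed container service. The commands below are a generic sketch, not phospho's official deployment scripts; whether image names and registries are preconfigured depends on the repository's compose file.

# build all service images defined in the compose file
docker compose build
# push them to the registries configured under each service's image: key, if any
docker compose push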

Contact us

To get help, feel free to reach out at contact@phospho.ai
+ + + + + + + + + + + + \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 0000000..5b8dcc5 --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,199 @@ + + + + https://phospho-app.github.io/docs/ + 2025-09-24 + + + https://phospho-app.github.io/docs/cli/ + 2025-09-24 + + + https://phospho-app.github.io/docs/getting-started/ + 2025-09-24 + + + https://phospho-app.github.io/docs/self-hosting/ + 2025-09-24 + + + https://phospho-app.github.io/docs/analytics/ab-test/ + 2025-09-24 + + + https://phospho-app.github.io/docs/analytics/clustering/ + 2025-09-24 + + + https://phospho-app.github.io/docs/analytics/evaluation/ + 2025-09-24 + + + https://phospho-app.github.io/docs/analytics/events/ + 2025-09-24 + + + https://phospho-app.github.io/docs/analytics/fine-tuning/ + 2025-09-24 + + + https://phospho-app.github.io/docs/analytics/language/ + 2025-09-24 + + + https://phospho-app.github.io/docs/analytics/sentiment-analysis/ + 2025-09-24 + + + https://phospho-app.github.io/docs/analytics/sessions-and-users/ + 2025-09-24 + + + https://phospho-app.github.io/docs/analytics/tagging/ + 2025-09-24 + + + https://phospho-app.github.io/docs/analytics/usage-based-billing/ + 2025-09-24 + + + https://phospho-app.github.io/docs/analytics/user-feedback/ + 2025-09-24 + + + https://phospho-app.github.io/docs/api-reference/introduction/ + 2025-09-24 + + + https://phospho-app.github.io/docs/examples/introduction/ + 2025-09-24 + + + https://phospho-app.github.io/docs/guides/LLM-judge/ + 2025-09-24 + + + https://phospho-app.github.io/docs/guides/export-dataset-argilla/ + 2025-09-24 + + + https://phospho-app.github.io/docs/guides/getting-started/ + 2025-09-24 + + + https://phospho-app.github.io/docs/guides/understand-your-data/ + 2025-09-24 + + + https://phospho-app.github.io/docs/guides/user-intent/ + 2025-09-24 + + + https://phospho-app.github.io/docs/guides/welcome-guide/ + 2025-09-24 + + + https://phospho-app.github.io/docs/import-data/api-integration/ + 2025-09-24 + + + https://phospho-app.github.io/docs/import-data/import-file/ + 2025-09-24 + + + https://phospho-app.github.io/docs/import-data/import-langfuse/ + 2025-09-24 + + + https://phospho-app.github.io/docs/import-data/import-langsmith/ + 2025-09-24 + + + https://phospho-app.github.io/docs/import-data/tracing/ + 2025-09-24 + + + https://phospho-app.github.io/docs/integrations/argilla/ + 2025-09-24 + + + https://phospho-app.github.io/docs/integrations/langchain/ + 2025-09-24 + + + https://phospho-app.github.io/docs/integrations/postgresql/ + 2025-09-24 + + + https://phospho-app.github.io/docs/integrations/powerbi/ + 2025-09-24 + + + https://phospho-app.github.io/docs/integrations/supabase/ + 2025-09-24 + + + https://phospho-app.github.io/docs/integrations/javascript/logging/ + 2025-09-24 + + + https://phospho-app.github.io/docs/integrations/python/analytics/ + 2025-09-24 + + + https://phospho-app.github.io/docs/integrations/python/logging/ + 2025-09-24 + + + https://phospho-app.github.io/docs/integrations/python/reference/ + 2025-09-24 + + + https://phospho-app.github.io/docs/integrations/python/testing/ + 2025-09-24 + + + https://phospho-app.github.io/docs/integrations/python/examples/openai-agent/ + 2025-09-24 + + + https://phospho-app.github.io/docs/integrations/python/examples/openai-streamlit/ + 2025-09-24 + + + https://phospho-app.github.io/docs/local/custom-job/ + 2025-09-24 + + + https://phospho-app.github.io/docs/local/llm-provider/ + 2025-09-24 + + + https://phospho-app.github.io/docs/local/optimize/ + 2025-09-24 + + + 
https://phospho-app.github.io/docs/local/quickstart/ + 2025-09-24 + + + https://phospho-app.github.io/docs/models/classify/ + 2025-09-24 + + + https://phospho-app.github.io/docs/models/embeddings/ + 2025-09-24 + + + https://phospho-app.github.io/docs/models/llm/ + 2025-09-24 + + + https://phospho-app.github.io/docs/models/multimodal/ + 2025-09-24 + + + https://phospho-app.github.io/docs/models/tak/ + 2025-09-24 + + \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz new file mode 100644 index 0000000..ad42901 Binary files /dev/null and b/sitemap.xml.gz differ

Make sure you have imported your data before starting this guide.