52 changes: 52 additions & 0 deletions docs/docs/Components/bundles-burncloud.mdx
@@ -0,0 +1,52 @@
---
title: BurnCloud
slug: /bundles-burncloud
description: Use BurnCloud's OpenAI-compatible models inside Langflow.
---

import Icon from "@site/src/components/icon";

<Icon name="Blocks" aria-hidden="true" /> [**Bundles**](/components-bundle-components) contain custom components that support specific third-party integrations with Langflow.

This page describes the components that are available in the **BurnCloud** bundle.

For more information about BurnCloud features and API limits, see the [BurnCloud documentation](https://burncloud.com/).

## BurnCloud text generation

The **BurnCloud** component generates text through BurnCloud's OpenAI-compatible API gateway. It works with the same chat-completions schema as OpenAI, while letting you point to BurnCloud-hosted models or a private BurnCloud deployment.

It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). The **Language Model** output is an instance of [`ChatOpenAI`](https://python.langchain.com/docs/integrations/chat/openai) configured to target BurnCloud's `/v1` endpoints.

Use the **Language Model** output when you want to pass a BurnCloud model into another LLM-driven component, such as the **Agent**, **Smart Function**, or **Prompt Template** component.

### BurnCloud parameters

import PartialParams from '@site/docs/_partial-hidden-params.mdx';

<PartialParams />

| Name | Type | Description |
|------|------|-------------|
| api_key | SecretString | Input parameter. Your BurnCloud API key. Required for authentication and for fetching the latest model list. |
| base_url | String | Input parameter. Override the default `https://ai.burncloud.com` base URL if you host BurnCloud privately. The component appends `/v1` automatically when needed. (Advanced) |
| model_name | String | Input parameter. BurnCloud model to use. Options update dynamically after you provide a valid API key and click <Icon name="RefreshCw" aria-hidden="true" /> **Refresh**. Defaults to `gpt-4o`. |
| temperature | Float | Input parameter. Controls randomness. Range: `[0, 2]`. Defaults to `0.7`. (Advanced) |
| top_p | Float | Input parameter. Alternative sampling control that limits the cumulative probability mass of candidate tokens. Range: `[0, 1]`. Defaults to `1.0`. (Advanced) |
| max_tokens | Integer | Input parameter. Maximum number of tokens to generate. Leave empty to let BurnCloud decide. (Advanced) |
| input_value | String | Input parameter. The prompt or chat content you want to send to the model. |
| system_message | String | Input parameter. Sets the assistant's persona or high-level instructions. |
| stream | Boolean | Input parameter. Streams partial results when enabled. |
| output_parser | OutputParser | Input parameter. (Advanced) Parse the model response before passing it downstream. |
| model_output | LanguageModel | Output parameter. A `ChatOpenAI` instance configured for BurnCloud. |
| text_output | Message | Output parameter. The generated response from the selected BurnCloud model. |
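The `base_url` handling described above can be pictured as a small normalization step. A minimal sketch of the idea (a hypothetical helper mirroring the documented behavior, not the component's actual code):

```python
def normalize_base_url(base_url: str) -> str:
    """Return the base URL with exactly one trailing /v1 segment."""
    url = base_url.rstrip("/")
    return url if url.endswith("/v1") else url + "/v1"
```

This way, `https://ai.burncloud.com`, `https://ai.burncloud.com/`, and `https://ai.burncloud.com/v1` all resolve to the same endpoint prefix.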

### Use BurnCloud in a flow

1. Sign up for a [BurnCloud account](https://burncloud.com/) and generate an API key in the BurnCloud dashboard.
2. In Langflow, open <Icon name="Blocks" aria-hidden="true" /> **Bundles** and drag the **BurnCloud** component into your flow.
3. Paste your API key into **BurnCloud API Key**. Optionally set **Base URL** if your organization hosts BurnCloud privately.
4. Click <Icon name="RefreshCw" aria-hidden="true" /> **Refresh** next to **Model** to load the latest BurnCloud-hosted model list, then pick the model you need.
5. Configure sampling parameters such as **Temperature**, **Top P**, and **Max Output Tokens** (if required) along with your **System Message** and **Prompt**.
6. Connect **Chat Input** → **BurnCloud** → **Chat Output** (or feed the **Language Model** output into downstream components like **Agent** or **Smart Function**).
7. Click **Playground** to test requests and validate the connection before deploying the flow.
1 change: 1 addition & 0 deletions docs/sidebars.js
@@ -304,6 +304,7 @@ module.exports = {
"Components/bundles-azure",
"Components/bundles-baidu",
"Components/bundles-bing",
"Components/bundles-burncloud",
"Components/bundles-cassandra",
"Components/bundles-chroma",
"Components/bundles-cleanlab",
@@ -2089,6 +2089,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -1261,6 +1261,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -1388,6 +1388,7 @@
"options": [
"OpenAI",
"Anthropic",
"BurnCloud",
"Google"
],
"options_metadata": [
@@ -1657,6 +1658,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -1285,6 +1285,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -917,6 +917,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -1346,6 +1346,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -1713,6 +1713,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -2098,6 +2098,7 @@
"options": [
"OpenAI",
"Anthropic",
"BurnCloud",
"Google"
],
"options_metadata": [
@@ -2686,6 +2687,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -975,6 +975,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -1052,6 +1052,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -464,6 +464,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -1197,6 +1198,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -2701,6 +2703,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -1037,6 +1037,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -1395,6 +1395,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -1770,6 +1770,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -2496,6 +2497,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -3222,6 +3224,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
@@ -866,6 +866,7 @@
"name": "agent_llm",
"options": [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
36 changes: 36 additions & 0 deletions src/frontend/src/icons/BurnCloud/BurnCloudIcon.jsx
@@ -0,0 +1,36 @@
import { useId } from "react";

const BurnCloudIconSVG = ({ isDark = false, ...props }) => {
const gradientId = `burncloud-gradient-${useId()}`;
const start = isDark ? "#f9cf69" : "#f7b52c";
const end = isDark ? "#ff7a3a" : "#e95513";
return (
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 24 24"
role="img"
aria-label="BurnCloud icon"
fill="none"
focusable="false"
{...props}
>
<defs>
<linearGradient
id={gradientId}
x2="1"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(-0.04 9.248 -11.433 -0.05 12.058 8.618)"
>
<stop offset="0" stopColor={start} />
<stop offset="1" stopColor={end} />
</linearGradient>
</defs>
<path
fill={`url(#${gradientId})`}
d="M17.8 10.1c-.6-.9-1.4-1.9-1.4-1.9s-1.8-2.1-1.5-5.2c0 0-6.9 2.7-7 8.2 0 0-1-1.6-.8-4.6 0 0-2.2 2.1-2.5 5.5-2.1.7-3.8 2.5-3.8 4.3 0 2.5 2.7 4.6 5.9 4.6-2.4-.4-4.2-2-4.2-4 0-1.4.8-2.5 2-3.3.1 1.1.5 2.4.5 2.4s1.2 3.8 5.4 4.8a6.8 6.8 0 0 0 3.7-.3c1.3-.6 2.8-1.8 2.8-4.5 0 0 .1-2.7-1.5-4.1 0 0 2.1 5-1.8 6.5a4.8 4.8 0 0 1-3.9 0c-1.7-.7-3.8-2.5-3.5-7.2 0 0 1 3.4 3.2 4.7 0 0-2-5.8 3.9-9.8 0 0 .5 2.1 1.9 3.3.4.4 4 3.2 3.3 8 .7-.9 1.3-3.1.7-4.8 0 0-.1-.4-.4-.9 1.5.3 2.7 1.5 2.8 4.2.1 2.3-1.6 4.2-3.8 5 3-.4 5.4-2.7 5.4-5.6 0-2.8-2.2-5.1-5.4-5.3Z"
/>
</svg>
);
};

export default BurnCloudIconSVG;
12 changes: 12 additions & 0 deletions src/frontend/src/icons/BurnCloud/BurnCloudIcon.svg
9 changes: 9 additions & 0 deletions src/frontend/src/icons/BurnCloud/index.tsx
@@ -0,0 +1,9 @@
import type React from "react";
import { forwardRef } from "react";
import BurnCloudIconSVG from "./BurnCloudIcon";

export const BurnCloudIcon = forwardRef<SVGSVGElement, React.PropsWithChildren<{}>>(
(props, ref) => <BurnCloudIconSVG ref={ref} {...props} />,
);

export default BurnCloudIcon;
2 changes: 2 additions & 0 deletions src/frontend/src/icons/eagerIconImports.ts
@@ -14,6 +14,7 @@ import { AthenaIcon } from "@/icons/athena/index";
import { BingIcon } from "@/icons/Bing";
import { BotMessageSquareIcon } from "@/icons/BotMessageSquare";
import { BWPythonIcon } from "@/icons/BW python";
import { BurnCloudIcon } from "@/icons/BurnCloud";
import { CassandraIcon } from "@/icons/Cassandra";
import { ChromaIcon } from "@/icons/ChromaIcon";
import { ClickhouseIcon } from "@/icons/Clickhouse";
@@ -133,6 +134,7 @@ export const eagerIconsMapping = {
AWSInverted: AWSInvertedIcon,
Azure: AzureIcon,
Bing: BingIcon,
BurnCloud: BurnCloudIcon,
BotMessageSquare: BotMessageSquareIcon,
BWPython: BWPythonIcon,
Cassandra: CassandraIcon,
4 changes: 4 additions & 0 deletions src/frontend/src/icons/lazyIconImports.ts
@@ -117,6 +117,10 @@ export const lazyIconsMapping = {
import("@/icons/Brightdata").then((mod) => ({
default: mod.BrightdataIcon,
})),
BurnCloud: () =>
import("@/icons/BurnCloud").then((mod) => ({
default: mod.BurnCloudIcon,
})),
BWPython: () =>
import("@/icons/BW python").then((mod) => ({ default: mod.BWPythonIcon })),
Cassandra: () =>
1 change: 1 addition & 0 deletions src/frontend/src/utils/styleUtils.ts
@@ -252,6 +252,7 @@ export const SIDEBAR_BUNDLES = [
{ display_name: "Azure", name: "azure", icon: "Azure" },
{ display_name: "Baidu", name: "baidu", icon: "BaiduQianfan" },
{ display_name: "Bing", name: "bing", icon: "Bing" },
{ display_name: "BurnCloud", name: "BurnCloud", icon: "BurnCloud" },
{ display_name: "Cassandra", name: "cassandra", icon: "Cassandra" },
{ display_name: "Chroma", name: "chroma", icon: "Chroma" },
{ display_name: "ClickHouse", name: "clickhouse", icon: "Clickhouse" },
24 changes: 23 additions & 1 deletion src/lfx/src/lfx/base/models/model_input_constants.py
@@ -262,6 +262,21 @@ def _get_sambanova_inputs_and_fields():
except ImportError:
pass

try:
from lfx.components.BurnCloud.burncloud import BurnCloudModel

burncloud_inputs = get_filtered_inputs(BurnCloudModel)
MODEL_PROVIDERS_DICT["BurnCloud"] = {
"fields": create_input_fields_dict(burncloud_inputs, ""),
"inputs": burncloud_inputs,
"prefix": "",
"component_class": BurnCloudModel(),
"icon": BurnCloudModel.icon,
"is_active": True,
}
except ImportError:
pass

try:
from lfx.components.nvidia.nvidia import NVIDIAModelComponent

@@ -373,6 +388,13 @@ def _get_sambanova_inputs_and_fields():

MODELS_METADATA = {name: {"icon": prov["icon"]} for name, prov in ACTIVE_MODEL_PROVIDERS_DICT.items()}

MODEL_PROVIDERS_LIST = ["Anthropic", "Google Generative AI", "OpenAI", "IBM watsonx.ai", "Ollama"]
MODEL_PROVIDERS_LIST = [
"Anthropic",
"BurnCloud",
"Google Generative AI",
"OpenAI",
"IBM watsonx.ai",
"Ollama",
]

MODEL_OPTIONS_METADATA = [MODELS_METADATA[key] for key in MODEL_PROVIDERS_LIST if key in MODELS_METADATA]
32 changes: 32 additions & 0 deletions src/lfx/src/lfx/components/BurnCloud/__init__.py
@@ -0,0 +1,32 @@
from __future__ import annotations

from typing import TYPE_CHECKING, Any

from lfx.components._importing import import_mod

if TYPE_CHECKING: # pragma: no cover
from .burncloud import BurnCloudModel

_dynamic_imports = {
"BurnCloudModel": "burncloud",
}

__all__ = ["BurnCloudModel"]


def __getattr__(attr_name: str) -> Any:
"""Lazily import BurnCloud components on attribute access."""
if attr_name not in _dynamic_imports:
msg = f"module '{__name__}' has no attribute '{attr_name}'"
raise AttributeError(msg)
try:
result = import_mod(attr_name, _dynamic_imports[attr_name], __spec__.parent)
except (ModuleNotFoundError, ImportError, AttributeError) as e: # pragma: no cover - thin wrapper
msg = f"Could not import '{attr_name}' from '{__name__}': {e}"
raise AttributeError(msg) from e
globals()[attr_name] = result
return result


def __dir__() -> list[str]: # pragma: no cover
return list(__all__)