Fireworks
@Fireworks AI
23 models
Fireworks AI is a leading provider of advanced language model services, focusing on function calling and multimodal processing. Its latest model, Firefunction V2, is based on Llama-3 and optimized for function calling, conversation, and instruction following. The visual language model FireLLaVA-13B supports mixed image and text input. Other notable models include the Llama and Mixtral series, providing efficient multilingual instruction following and generation.

Supported Models

Fireworks

| Maximum Context Length | Maximum Output Length | Input Price | Output Price |
| --- | --- | --- | --- |
| 8K | -- | -- | -- |
| 32K | -- | -- | -- |

Using Fireworks AI in LobeChat


Fireworks.ai is a high-performance generative AI model inference platform that allows users to access and utilize various models through its API. The platform supports multiple modalities, including text and visual language models, and offers features like function calls and JSON schemas to enhance the flexibility of application development.
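Under the hood, Fireworks exposes an OpenAI-compatible chat-completions REST API, which is what clients like LobeChat call with your key. As a minimal sketch (the endpoint path and the model identifier below are assumptions based on Fireworks' public docs; pick an actual model ID from your console):

```python
import json

# Assumed OpenAI-compatible endpoint; verify against Fireworks' current docs.
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"

def build_chat_request(api_key: str, prompt: str,
                       model: str = "accounts/fireworks/models/llama-v3-8b-instruct"):
    """Build the headers and JSON body for a Fireworks chat-completion call.

    The default model ID is illustrative only.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return headers, json.dumps(payload)

headers, body = build_chat_request("fw_your_key_here", "Hello!")
# To actually send the request:
#   requests.post(API_URL, headers=headers, data=body)
```

Because the API follows the OpenAI request shape, the same payload also works with OpenAI-style SDKs pointed at the Fireworks base URL.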

This article will guide you on how to use Fireworks AI in LobeChat.

Step 1: Obtain an API Key for Fireworks AI

  • Log in to the Fireworks.ai Console
  • Navigate to the User page and click on API Keys
  • Create a new API key
    (Screenshot: Create API Key)
  • Copy and securely save the generated API key
    (Screenshot: Save API Key)

Store the key securely, as it is shown only once. If you lose it, you will need to create a new one.
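One common way to "save the key securely" in your own scripts is to keep it out of source code and read it from an environment variable at run time. A small sketch (the variable name `FIREWORKS_API_KEY` is an illustrative convention, not something LobeChat requires):

```python
import os

def load_fireworks_key(env_var: str = "FIREWORKS_API_KEY") -> str:
    """Read the Fireworks API key from the environment, failing loudly if absent."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} to your Fireworks API key")
    return key
```

Within LobeChat itself, the key is entered once in the settings UI (Step 2 below), so this pattern applies only to direct API usage.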

Step 2: Configure Fireworks AI in LobeChat

  • Access the Settings interface in LobeChat
  • Under Language Model, locate the settings for Fireworks AI
    (Screenshot: Enter API Key)
  • Enter the obtained API key
  • Select a Fireworks AI model for your AI assistant to start a conversation
    (Screenshot: Select Fireworks AI Model and Start Conversation)

Note that usage may incur charges from the API service provider; refer to Fireworks AI's pricing policy for details.

You are now ready to use the models provided by Fireworks AI for conversations in LobeChat.

Related Providers

OpenAI
@OpenAI
22 models
OpenAI is a global leader in artificial intelligence research, with models like the GPT series pushing the frontiers of natural language processing. OpenAI is committed to transforming multiple industries through innovative and efficient AI solutions. Their products demonstrate significant performance and cost-effectiveness, widely used in research, business, and innovative applications.
Ollama
@Ollama
40 models
Ollama provides models that cover a wide range of fields, including code generation, mathematical operations, multilingual processing, and conversational interaction, catering to diverse enterprise-level and localized deployment needs.
Anthropic
Claude
@Anthropic
8 models
Anthropic is a company focused on AI research and development, offering a range of advanced language models such as Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus, and Claude 3 Haiku. These models achieve an ideal balance between intelligence, speed, and cost, suitable for various applications from enterprise workloads to rapid-response scenarios. Claude 3.5 Sonnet, as their latest model, has excelled in multiple evaluations while maintaining a high cost-performance ratio.
AWS
Bedrock
@Bedrock
14 models
Bedrock is a service provided by Amazon AWS, focusing on delivering advanced AI language and visual models for enterprises. Its model family includes Anthropic's Claude series, Meta's Llama 3.1 series, and more, offering a range of options from lightweight to high-performance, supporting tasks such as text generation, conversation, and image processing for businesses of varying scales and needs.
Google
Gemini
@Google
14 models
Google's Gemini series, developed by Google DeepMind, represents its most advanced and versatile AI models. Designed for multimodal capabilities, they support seamless understanding and processing of text, code, images, audio, and video, and run in environments ranging from data centers to mobile devices.