# lmstudio-js: LM Studio Client SDK

Use local LLMs in JS/TS/Node.

`lmstudio-js` is LM Studio's official JavaScript/TypeScript client SDK. It allows you to:
- Use LLMs to respond in chats or predict text completions
- Define functions as tools, and turn LLMs into autonomous agents that run completely locally (see the sketch after this list)
- Load, configure, and unload models from memory
- Run in both the browser and any Node-compatible environment
- Generate embeddings for text, and more!
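As a taste of the agent workflow from the second bullet, here is a minimal sketch based on the tool-use pattern in the lmstudio-js docs. The model name, the `multiply` tool, and the prompt are illustrative; check `tool()` and `act()` against the current API reference:

```ts
import { LMStudioClient, tool } from "@lmstudio/sdk";
import { z } from "zod";

const client = new LMStudioClient();

// A tool is a plain function plus a zod-typed parameter schema the model may call.
const multiplyTool = tool({
  name: "multiply",
  description: "Given two numbers a and b, returns their product.",
  parameters: { a: z.number(), b: z.number() },
  implementation: ({ a, b }) => a * b,
});

// Illustrative model name; use any tool-capable model you have downloaded.
const model = await client.llm.model("qwen2.5-7b-instruct");

// act() runs a tool-calling loop until the model settles on a final answer.
await model.act("What is 12345 multiplied by 54321?", [multiplyTool], {
  onMessage: (message) => console.info(message.toString()),
});
```

Everything here runs against your local LM Studio instance; no remote calls are involved.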
Using Python? See lmstudio-python.
## Installation

```shell
npm install @lmstudio/sdk --save
```
## Quick Example

```ts
import { LMStudioClient } from "@lmstudio/sdk";

const client = new LMStudioClient();

// Gets a handle to the model, loading it into memory if it is not already loaded.
const model = await client.llm.model("llama-3.2-1b-instruct");
const result = await model.respond("What is the meaning of life?");

console.info(result.content);
```
For more examples and documentation, visit lmstudio-js docs.
## Why use `lmstudio-js` over the `openai` SDK?

OpenAI's SDK is designed for use with OpenAI's proprietary models. As such, it is missing many features that are essential for using LLMs in a local environment, such as:
- Managing loading and unloading models from memory (see the sketch after this list)
- Configuring load parameters (context length, GPU offload settings, etc.)
- Speculative decoding
- Getting information about a model (such as context length, model size, etc.)
- ... and more
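Here is a hedged sketch of what those local-only controls look like in `lmstudio-js`. The model names are placeholders, and the exact config fields and method names should be verified against the current API reference:

```ts
import { LMStudioClient } from "@lmstudio/sdk";

const client = new LMStudioClient();

// Explicitly load a model with load-time parameters
// (verify the config field names against the current docs).
const model = await client.llm.load("qwen2.5-7b-instruct", {
  config: { contextLength: 8192 },
});

// Query information about the loaded model.
console.info("context length:", await model.getContextLength());

// Speculative decoding: pair the model with a smaller draft model.
const result = await model.respond("Tell me a joke.", {
  draftModel: "llama-3.2-1b-instruct",
});
console.info(result.content);

// Unload to free memory when done.
await model.unload();
```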
In addition, while the `openai` SDK is automatically generated, `lmstudio-js` is designed from the ground up to be clean and easy to use for TypeScript/JavaScript developers.
## Contributing

You can build the project locally by following these steps:

```shell
git clone https://github.com/lmstudio-ai/lmstudio-js.git --recursive
cd lmstudio-js
npm install
npm run build
```
See CONTRIBUTING.md for more information.
## Community
Discuss all things lmstudio-js in #dev-chat in LM Studio's Community Discord server.