Language Models

Inference with pipeline

Quick start
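A minimal sketch of what a pipeline quick start might look like, assuming the Hugging Face `transformers` pipeline API; the model name `sshleifer/tiny-gpt2` is a tiny placeholder picked for illustration, not necessarily the model this guide targets.

```python
# Sketch only: assumes Hugging Face transformers is installed;
# "sshleifer/tiny-gpt2" is a small placeholder model for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="sshleifer/tiny-gpt2")
outputs = generator("Hello, world", max_new_tokens=8)
print(outputs[0]["generated_text"])
```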

Inference with your data
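When running inference over your own records, a common pattern is to batch them before handing them to the model. A self-contained sketch of that pattern, with a stand-in `run_model` in place of a real pipeline call:

```python
from typing import Iterable, List

def batched(items: List[str], batch_size: int) -> Iterable[List[str]]:
    """Yield fixed-size chunks of the input list."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def run_model(batch: List[str]) -> List[str]:
    """Stand-in for a real pipeline call; echoes its inputs."""
    return [f"output for: {text}" for text in batch]

rows = [f"my record {i}" for i in range(10)]   # your data
results = []
for batch in batched(rows, batch_size=4):
    results.extend(run_model(batch))

print(len(results))  # one output per input row
```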

Inference with multithreading on CPU
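One way to use multiple CPU threads is to serve independent requests concurrently. A self-contained sketch with a stand-in inference function; a real model call only benefits from threads if the backend releases the GIL, as NumPy- or Torch-backed ops typically do:

```python
from concurrent.futures import ThreadPoolExecutor

def infer(prompt: str) -> str:
    """Stand-in for a real, GIL-releasing model call."""
    return prompt.upper()

prompts = [f"prompt {i}" for i in range(8)]

# Run up to 4 requests concurrently on CPU threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(infer, prompts))

print(results[0])
```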

Inference with multiple GPUs
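A common multi-GPU strategy is data parallelism: shard incoming prompts across devices and merge the results. A scheduling sketch with stand-in workers; a real setup would place one model replica per GPU (the `cuda:0`/`cuda:1` names are assumed device labels):

```python
def make_worker(device: str):
    """Stand-in for a model replica pinned to one GPU."""
    def worker(prompt: str) -> str:
        return f"{device}: {prompt}"
    return worker

devices = ["cuda:0", "cuda:1"]          # assume two GPUs
workers = [make_worker(d) for d in devices]

prompts = [f"prompt {i}" for i in range(6)]

# Round-robin shard: prompt i goes to device i % len(devices).
results = [workers[i % len(workers)](p) for i, p in enumerate(prompts)]

print(results[:2])
```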

Finetune with pipeline

Quick start
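Under the hood, finetuning is a loop that nudges model weights to reduce a loss on your examples. A self-contained toy sketch (one scalar weight, squared-error loss) showing the shape of that loop; a real pipeline wraps this in data loading, an optimizer, and checkpointing:

```python
# Toy finetuning loop: fit y = w * x on tiny "training data".
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target); true w = 2

w = 0.0           # "pretrained" weight we are finetuning
lr = 0.05         # learning rate

for epoch in range(100):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad

print(round(w, 3))  # converges toward 2.0
```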

Finetune with your data
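Finetuning on your own data usually starts by converting your records into a simple on-disk format such as JSON Lines, one prompt/response pair per line. A self-contained sketch; the field names here are illustrative, not a required schema:

```python
import json
import tempfile
from pathlib import Path

# Your raw records, in whatever shape they come in.
records = [
    {"question": "2+2?", "answer": "4"},
    {"question": "Capital of France?", "answer": "Paris"},
]

path = Path(tempfile.mkdtemp()) / "train.jsonl"

# Write one JSON object per line: a common finetuning data layout.
with path.open("w", encoding="utf-8") as f:
    for r in records:
        f.write(json.dumps({"prompt": r["question"], "response": r["answer"]}) + "\n")

# Read it back the way a training loader would.
loaded = [json.loads(line) for line in path.read_text(encoding="utf-8").splitlines()]
print(loaded[0]["prompt"])
```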

Inference with your finetuned model
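After finetuning, inference means loading your saved weights instead of the base ones. A self-contained sketch using a toy one-weight model, saving and reloading a checkpoint as JSON; a real pipeline would instead point its loader at your checkpoint directory:

```python
import json
import tempfile
from pathlib import Path

ckpt = Path(tempfile.mkdtemp()) / "finetuned.json"

# Pretend finetuning produced this weight; save it as a checkpoint.
ckpt.write_text(json.dumps({"w": 2.0}))

# Later, at inference time: load the checkpoint, not the base weight.
w = json.loads(ckpt.read_text())["w"]

def predict(x: float) -> float:
    return w * x

print(predict(3.0))
```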