llms#
LLM objects help run an LLM on prompts. All LLMs derive from the LLM base class.
Tip
Instead of calling run() directly, use a step that takes an LLM as an args argument, such as Prompt or FewShotPrompt.
- class datadreamer.llms.LLM(cache_folder_path=None)[source]#
Bases: _Cachable
- format_prompt(max_new_tokens=None, beg_instruction=None, in_context_examples=None, end_instruction=None, sep='\n', min_in_context_examples=None, max_in_context_examples=None)[source]#
- Return type: str
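The assembly performed by format_prompt() can be sketched roughly as follows. This is a simplified illustration, not the library's actual implementation: truncation to fit max_new_tokens and the min_in_context_examples constraint are omitted.

```python
def sketch_format_prompt(beg_instruction=None, in_context_examples=None,
                         end_instruction=None, sep="\n",
                         max_in_context_examples=None):
    """Rough sketch: join the optional prompt sections with `sep`."""
    examples = list(in_context_examples or [])
    if max_in_context_examples is not None:
        # Keep at most the allowed number of in-context examples.
        examples = examples[:max_in_context_examples]
    sections = [beg_instruction, *examples, end_instruction]
    return sep.join(s for s in sections if s is not None)

prompt = sketch_format_prompt(
    beg_instruction="Translate English to French.",
    in_context_examples=["cat => chat", "dog => chien"],
    end_instruction="bird =>",
)
print(prompt)
```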
- abstract run(prompts, max_new_tokens=None, temperature=1.0, top_p=0.0, n=1, stop=None, repetition_penalty=None, logit_bias=None, batch_size=10, batch_scheduler_buffer_size=None, adaptive_batch_size=False, seed=None, progress_interval=60, force=False, cache_only=False, verbose=None, log_level=None, total_num_prompts=None, return_generator=False, **kwargs)[source]#
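Concrete subclasses implement run(). The shape of the result can be illustrated with a toy stand-in; the n > 1 shape shown here (a list of n generations per prompt) is an assumption about the contract, and this EchoLLM is not a real LLM subclass:

```python
class EchoLLM:
    """Toy stand-in for an LLM subclass; 'generates' by echoing."""

    def run(self, prompts, n=1, **kwargs):
        if n == 1:
            # One generated string per prompt.
            return [f"echo: {p}" for p in prompts]
        # Assumed shape for n > 1: a list of n generations per prompt.
        return [[f"echo: {p} ({i})" for i in range(n)] for p in prompts]

llm = EchoLLM()
print(llm.run(["hello"]))
print(llm.run(["hello"], n=2))
```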
- class datadreamer.llms.OpenAI(model_name, system_prompt=None, organization=None, api_key=None, base_url=None, api_version=None, retry_on_fail=True, cache_folder_path=None, **kwargs)[source]#
Bases: LLM
- run(prompts, max_new_tokens=None, temperature=1.0, top_p=0.0, n=1, stop=None, repetition_penalty=None, logit_bias=None, batch_size=10, batch_scheduler_buffer_size=None, adaptive_batch_size=False, seed=None, progress_interval=60, force=False, cache_only=False, verbose=None, log_level=None, total_num_prompts=None, return_generator=False, **kwargs)[source]#
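The stop parameter ends generation at the first occurrence of any stop sequence; the OpenAI API applies this server-side and excludes the stop string from the returned text. A toy illustration of those semantics:

```python
def apply_stop(text, stop):
    """Sketch of stop-sequence semantics: truncate at the earliest
    occurrence of any stop string (the stop string itself is excluded)."""
    if not stop:
        return text
    cut = len(text)
    for s in ([stop] if isinstance(stop, str) else stop):
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(apply_stop("Paris.\nQ: next question", stop=["\nQ:"]))
```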
- class datadreamer.llms.OpenAIAssistant(model_name, system_prompt=None, tools=None, organization=None, api_key=None, base_url=None, api_version=None, retry_on_fail=True, cache_folder_path=None, **kwargs)[source]#
Bases: OpenAI
- class datadreamer.llms.HFTransformers(model_name, chat_prompt_template=AUTO, system_prompt=AUTO, revision=None, trust_remote_code=False, device=None, device_map=None, dtype=None, load_in_4bit=False, load_in_8bit=False, quantization_config=None, adapter_name=None, adapter_kwargs=None, cache_folder_path=None, **kwargs)[source]#
Bases: LLM
Loads an LLM via Hugging Face Transformers.
- Parameters:
  - model_name (str) – The name of the model to load.
  - chat_prompt_template (None | str | Default, default: AUTO) – The chat prompt template to apply to prompts (AUTO tries to detect an appropriate template).
  - system_prompt (None | str | Default, default: AUTO) – The system prompt to use (AUTO tries to detect an appropriate prompt).
  - revision (Optional[str], default: None) – The specific model revision (branch, tag, or commit hash) to load.
  - trust_remote_code (bool, default: False) – Whether to trust and execute remote code from the model's repository.
  - device (None | int | str | device, default: None) – The device to load the model on.
  - device_map (None | dict | str, default: None) – A device map for sharding the model across multiple devices.
  - dtype (None | str | dtype, default: None) – The data type to load the model weights in.
  - load_in_4bit (bool, default: False) – Whether to load the model with 4-bit quantization.
  - load_in_8bit (bool, default: False) – Whether to load the model with 8-bit quantization.
  - quantization_config (None | QuantizationConfigMixin | dict, default: None) – A custom quantization configuration.
  - adapter_name (Optional[str], default: None) – The name of a PEFT adapter to load on top of the model.
  - adapter_kwargs (Optional[dict], default: None) – Additional keyword arguments used when loading the adapter.
  - cache_folder_path (Optional[str], default: None) – The folder path to cache results in.
- Raises:
  ValueError – If invalid or conflicting arguments are passed.
- Variables:
chat_prompt_template – The chat prompt template the model is using.
system_prompt – The system prompt the model is using.
- run(prompts, max_new_tokens=None, temperature=1.0, top_p=0.0, n=1, stop=None, repetition_penalty=None, logit_bias=None, batch_size=10, batch_scheduler_buffer_size=None, adaptive_batch_size=True, seed=None, progress_interval=60, force=False, cache_only=False, verbose=None, log_level=None, total_num_prompts=None, return_generator=False, **kwargs)[source]#
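A chat prompt template wraps each prompt in the model's expected chat format before generation. The substitution can be sketched as follows; the placeholder names {{system_prompt}} and {{prompt}} and the Llama-2-style template string are illustrative assumptions, not the library's actual template syntax:

```python
def apply_chat_template(template, system_prompt, prompt):
    # Sketch: fill hypothetical placeholders in a chat template string.
    return (template
            .replace("{{system_prompt}}", system_prompt)
            .replace("{{prompt}}", prompt))

template = "[INST] <<SYS>>\n{{system_prompt}}\n<</SYS>>\n\n{{prompt}} [/INST]"
print(apply_chat_template(template, "You are helpful.", "Hi!"))
```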
- class datadreamer.llms.CTransformers(model_name, model_type=None, model_file=None, max_context_length=None, chat_prompt_template=AUTO, system_prompt=AUTO, revision=None, threads=None, gpu_layers=0, cache_folder_path=None, **kwargs)[source]#
Bases: HFTransformers
Loads an LLM via the CTransformers library (GGML/GGUF models).
- Parameters:
  - model_name (str) – The name of the model to load.
  - model_type (Optional[str], default: None) – The model type (inferred from the model if not specified).
  - model_file (Optional[str], default: None) – The specific model file to load from the model repository.
  - max_context_length (Optional[int], default: None) – The maximum context length of the model.
  - chat_prompt_template (None | str | Default, default: AUTO) – The chat prompt template to apply to prompts (AUTO tries to detect an appropriate template).
  - system_prompt (None | str | Default, default: AUTO) – The system prompt to use (AUTO tries to detect an appropriate prompt).
  - revision (Optional[str], default: None) – The specific model revision to load.
  - threads (Optional[int], default: None) – The number of CPU threads to use.
  - gpu_layers (int, default: 0) – The number of model layers to offload to the GPU.
  - cache_folder_path (Optional[str], default: None) – The folder path to cache results in.
- Raises:
  ValueError – If invalid or conflicting arguments are passed.
- Variables:
chat_prompt_template – The chat prompt template the model is using.
system_prompt – The system prompt the model is using.
- run(prompts, max_new_tokens=None, temperature=1.0, top_p=0.0, n=1, stop=None, repetition_penalty=None, logit_bias=None, batch_size=10, batch_scheduler_buffer_size=None, adaptive_batch_size=False, seed=None, progress_interval=60, force=False, cache_only=False, verbose=None, log_level=None, total_num_prompts=None, return_generator=False, **kwargs)[source]#
- class datadreamer.llms.VLLM(model_name, chat_prompt_template=AUTO, system_prompt=AUTO, revision=None, trust_remote_code=False, dtype=None, quantization=None, swap_space=1, cache_folder_path=None, **kwargs)[source]#
Bases: HFTransformers
Loads an LLM via vLLM.
- Parameters:
  - model_name (str) – The name of the model to load.
  - chat_prompt_template (None | str | Default, default: AUTO) – The chat prompt template to apply to prompts (AUTO tries to detect an appropriate template).
  - system_prompt (None | str | Default, default: AUTO) – The system prompt to use (AUTO tries to detect an appropriate prompt).
  - revision (Optional[str], default: None) – The specific model revision to load.
  - trust_remote_code (bool, default: False) – Whether to trust and execute remote code from the model's repository.
  - dtype (None | str | dtype, default: None) – The data type to load the model weights in.
  - quantization (Optional[str], default: None) – The quantization method the model weights were quantized with, if any.
  - swap_space (int, default: 1) – The CPU swap space to allocate, in GiB per GPU.
  - cache_folder_path (Optional[str], default: None) – The folder path to cache results in.
- Raises:
  ValueError – If invalid or conflicting arguments are passed.
- Variables:
chat_prompt_template – The chat prompt template the model is using.
system_prompt – The system prompt the model is using.
- run(prompts, max_new_tokens=None, temperature=1.0, top_p=0.0, n=1, stop=None, repetition_penalty=None, logit_bias=None, batch_size=10, batch_scheduler_buffer_size=None, adaptive_batch_size=False, seed=None, progress_interval=60, force=False, cache_only=False, verbose=None, log_level=None, total_num_prompts=None, return_generator=False, **kwargs)[source]#
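With adaptive_batch_size enabled, a runner can back off when a batch fails (e.g. on GPU out-of-memory) and retry with smaller batches. A minimal sketch of that strategy, not the library's actual batch scheduler:

```python
def run_adaptive(prompts, generate, batch_size):
    """Try `generate` on slices of `prompts`; halve the batch size on
    failure and retry, down to single-prompt batches."""
    results, i = [], 0
    while i < len(prompts):
        size = batch_size
        while True:
            batch = prompts[i:i + size]
            try:
                results.extend(generate(batch))
                break
            except MemoryError:
                if size == 1:
                    raise  # even a single prompt does not fit
                size //= 2  # back off and retry with a smaller batch
        i += len(batch)
    return results

# Fake backend that "OOMs" on batches larger than 2:
def fake_generate(batch):
    if len(batch) > 2:
        raise MemoryError
    return [p.upper() for p in batch]

print(run_adaptive(["a", "b", "c", "d", "e"], fake_generate, batch_size=8))
```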
- class datadreamer.llms.Petals(model_name, chat_prompt_template=AUTO, system_prompt=AUTO, revision=None, trust_remote_code=False, device=None, dtype=None, adapter_name=None, cache_folder_path=None, **kwargs)[source]#
Bases: HFTransformers
Loads an LLM via Petals, which runs the model over a distributed swarm of machines.
- Parameters:
  - model_name (str) – The name of the model to load.
  - chat_prompt_template (None | str | Default, default: AUTO) – The chat prompt template to apply to prompts (AUTO tries to detect an appropriate template).
  - system_prompt (None | str | Default, default: AUTO) – The system prompt to use (AUTO tries to detect an appropriate prompt).
  - revision (Optional[str], default: None) – The specific model revision to load.
  - trust_remote_code (bool, default: False) – Whether to trust and execute remote code from the model's repository.
  - device (None | int | str | device, default: None) – The device to load the model on.
  - dtype (None | str | dtype, default: None) – The data type to load the model weights in.
  - adapter_name (Optional[str], default: None) – The name of a PEFT adapter to load on top of the model.
  - cache_folder_path (Optional[str], default: None) – The folder path to cache results in.
- Raises:
  ValueError – If invalid or conflicting arguments are passed.
- Variables:
chat_prompt_template – The chat prompt template the model is using.
system_prompt – The system prompt the model is using.
- run(prompts, max_new_tokens=None, temperature=1.0, top_p=0.0, n=1, stop=None, repetition_penalty=None, logit_bias=None, batch_size=10, batch_scheduler_buffer_size=None, adaptive_batch_size=True, seed=None, progress_interval=60, force=False, cache_only=False, verbose=None, log_level=None, total_num_prompts=None, return_generator=False, **kwargs)[source]#
- class datadreamer.llms.HFAPIEndpoint(endpoint, model_name, chat_prompt_template=AUTO, system_prompt=AUTO, token=None, revision=None, trust_remote_code=False, retry_on_fail=True, cache_folder_path=None, **kwargs)[source]#
Bases: HFTransformers
Runs an LLM via a Hugging Face Inference Endpoint.
- Parameters:
  - endpoint (str) – The URL of the inference endpoint to send requests to.
  - model_name (str) – The name of the model served by the endpoint.
  - chat_prompt_template (None | str | Default, default: AUTO) – The chat prompt template to apply to prompts (AUTO tries to detect an appropriate template).
  - system_prompt (None | str | Default, default: AUTO) – The system prompt to use (AUTO tries to detect an appropriate prompt).
  - token (Optional[str], default: None) – The Hugging Face API token to authenticate with.
  - revision (Optional[str], default: None) – The specific model revision to load.
  - trust_remote_code (bool, default: False) – Whether to trust and execute remote code from the model's repository.
  - retry_on_fail (bool, default: True) – Whether to retry failed requests.
  - cache_folder_path (Optional[str], default: None) – The folder path to cache results in.
- Raises:
  ValueError – If invalid or conflicting arguments are passed.
- Variables:
chat_prompt_template – The chat prompt template the model is using.
system_prompt – The system prompt the model is using.
- run(prompts, max_new_tokens=None, temperature=1.0, top_p=0.0, n=1, stop=None, repetition_penalty=None, logit_bias=None, batch_size=10, batch_scheduler_buffer_size=None, adaptive_batch_size=False, seed=None, progress_interval=60, force=False, cache_only=False, verbose=None, log_level=None, total_num_prompts=None, return_generator=False, **kwargs)[source]#
- class datadreamer.llms.Together(model_name, chat_prompt_template=AUTO, system_prompt=AUTO, api_key=None, max_context_length=None, tokenizer_model_name=None, tokenizer_revision=None, tokenizer_trust_remote_code=False, retry_on_fail=True, cache_folder_path=None, **kwargs)[source]#
Bases: LLMAPI
- run(prompts, max_new_tokens=None, temperature=1.0, top_p=0.0, n=1, stop=None, repetition_penalty=None, logit_bias=None, batch_size=10, batch_scheduler_buffer_size=None, adaptive_batch_size=False, seed=None, progress_interval=60, force=False, cache_only=False, verbose=None, log_level=None, total_num_prompts=None, return_generator=False, **kwargs)[source]#
- class datadreamer.llms.MistralAI(model_name, api_key=None, retry_on_fail=True, cache_folder_path=None, **kwargs)[source]#
Bases: LLMAPI
- run(prompts, max_new_tokens=None, temperature=1.0, top_p=0.0, n=1, stop=None, repetition_penalty=None, logit_bias=None, batch_size=10, batch_scheduler_buffer_size=None, adaptive_batch_size=False, seed=None, progress_interval=60, force=False, cache_only=False, verbose=None, log_level=None, total_num_prompts=None, return_generator=False, **kwargs)[source]#
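The retry_on_fail flag on these API-backed classes retries failed requests. The usual pattern is exponential backoff, sketched below; the attempt count, delays, and exception type are illustrative, not the library's actual values:

```python
import time

def with_retries(call, max_attempts=3, base_delay=0.01):
    """Retry `call` with exponential backoff; re-raise after the last attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 1x, 2x, 4x, ...

# Fake API call that fails twice, then succeeds:
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky))
```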
- class datadreamer.llms.Anthropic(model_name, api_key=None, retry_on_fail=True, cache_folder_path=None, **kwargs)[source]#
Bases: LiteLLM
- run(prompts, max_new_tokens=None, temperature=1.0, top_p=0.0, n=1, stop=None, repetition_penalty=None, logit_bias=None, batch_size=10, batch_scheduler_buffer_size=None, adaptive_batch_size=False, seed=None, progress_interval=60, force=False, cache_only=False, verbose=None, log_level=None, total_num_prompts=None, return_generator=False, **kwargs)[source]#
- class datadreamer.llms.Cohere(model_name, api_key=None, retry_on_fail=True, cache_folder_path=None, **kwargs)[source]#
Bases: LiteLLM
- run(prompts, max_new_tokens=None, temperature=1.0, top_p=0.0, n=1, stop=None, repetition_penalty=None, logit_bias=None, batch_size=10, batch_scheduler_buffer_size=None, adaptive_batch_size=False, seed=None, progress_interval=60, force=False, cache_only=False, verbose=None, log_level=None, total_num_prompts=None, return_generator=False, **kwargs)[source]#
- class datadreamer.llms.AI21(model_name, api_key=None, retry_on_fail=True, cache_folder_path=None, **kwargs)[source]#
Bases: LiteLLM
- run(prompts, max_new_tokens=None, temperature=1.0, top_p=0.0, n=1, stop=None, repetition_penalty=None, logit_bias=None, batch_size=10, batch_scheduler_buffer_size=None, adaptive_batch_size=False, seed=None, progress_interval=60, force=False, cache_only=False, verbose=None, log_level=None, total_num_prompts=None, return_generator=False, **kwargs)[source]#
- class datadreamer.llms.Bedrock(model_name, aws_access_key_id=None, aws_secret_access_key=None, aws_region_name=None, retry_on_fail=True, cache_folder_path=None, **kwargs)[source]#
Bases: LiteLLM
- run(prompts, max_new_tokens=None, temperature=1.0, top_p=0.0, n=1, stop=None, repetition_penalty=None, logit_bias=None, batch_size=10, batch_scheduler_buffer_size=None, adaptive_batch_size=False, seed=None, progress_interval=60, force=False, cache_only=False, verbose=None, log_level=None, total_num_prompts=None, return_generator=False, **kwargs)[source]#
- class datadreamer.llms.PaLM(model_name, api_key=None, retry_on_fail=True, cache_folder_path=None, **kwargs)[source]#
Bases: LiteLLM
- run(prompts, max_new_tokens=None, temperature=1.0, top_p=0.0, n=1, stop=None, repetition_penalty=None, logit_bias=None, batch_size=10, batch_scheduler_buffer_size=None, adaptive_batch_size=False, seed=None, progress_interval=60, force=False, cache_only=False, verbose=None, log_level=None, total_num_prompts=None, return_generator=False, **kwargs)[source]#
- class datadreamer.llms.VertexAI(model_name, vertex_project=None, vertex_location=None, retry_on_fail=True, cache_folder_path=None, **kwargs)[source]#
Bases: LiteLLM
- run(prompts, max_new_tokens=None, temperature=1.0, top_p=0.0, n=1, stop=None, repetition_penalty=None, logit_bias=None, batch_size=10, batch_scheduler_buffer_size=None, adaptive_batch_size=False, seed=None, progress_interval=60, force=False, cache_only=False, verbose=None, log_level=None, total_num_prompts=None, return_generator=False, **kwargs)[source]#
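All of these LLMs cache results when a cache_folder_path is set (via the _Cachable base class). The idea — keying responses on the prompt and the generation parameters so repeated calls are free — can be sketched as follows; this in-memory dict stands in for the on-disk cache and is not the library's actual implementation:

```python
import hashlib
import json

class CachedRunner:
    """Sketch of result caching: key on (prompt, generation kwargs)."""

    def __init__(self, generate):
        self.generate = generate
        self.cache = {}   # stands in for an on-disk cache folder
        self.misses = 0

    def run(self, prompt, **kwargs):
        # Stable key from the prompt and the generation parameters.
        key = hashlib.sha256(
            json.dumps([prompt, kwargs], sort_keys=True).encode()
        ).hexdigest()
        if key not in self.cache:
            self.misses += 1
            self.cache[key] = self.generate(prompt, **kwargs)
        return self.cache[key]

runner = CachedRunner(lambda p, **kw: p[::-1])
runner.run("hello", temperature=1.0)
runner.run("hello", temperature=1.0)  # served from cache
print(runner.misses)
```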