modeling_qwen2

Classes

modeling_qwen2.Qwen2RMSNorm
modeling_qwen2.Qwen2PreTrainedModel
modeling_qwen2.Qwen2Model
modeling_qwen2.Qwen2ForCausalLM
modeling_qwen2.Qwen2ForSequenceClassification
modeling_qwen2.Qwen2ForTokenClassification
modeling_qwen2.Qwen2ForQuestionAnswering

Module Contents

class modeling_qwen2.Qwen2RMSNorm(hidden_size, eps=1e-06, quant_bits=16)

Bases: pymllm.backends.qualcomm.transformers.core.rms_norm.QRMSNorm

Parameters:

eps (float)
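
Qwen2RMSNorm inherits its quantization-aware behaviour from the Qualcomm QRMSNorm base class; the quant_bits argument presumably controls the quantization bit width (an assumption, as it is not documented here). The normalization itself follows the standard Qwen2 RMSNorm formulation, sketched below as a plain PyTorch reference implementation rather than the pymllm class:

```python
# Minimal reference sketch of the RMSNorm computation (standard Qwen2 formulation).
# The pymllm class additionally inherits quantization behaviour from QRMSNorm;
# the role of `quant_bits` there is an assumption and is not modelled here.
import torch
import torch.nn as nn

class ReferenceRMSNorm(nn.Module):
    def __init__(self, hidden_size: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.eps = eps

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Normalize by the root mean square over the last dimension, then scale.
        variance = hidden_states.pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.eps)
        return self.weight * hidden_states
```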

class modeling_qwen2.Qwen2PreTrainedModel

Bases: transformers.modeling_utils.PreTrainedModel

config: transformers.models.qwen2.configuration_qwen2.Qwen2Config
base_model_prefix = 'model'
supports_gradient_checkpointing = True

class modeling_qwen2.Qwen2Model(config)

Bases: Qwen2PreTrainedModel

Parameters:

config (transformers.models.qwen2.configuration_qwen2.Qwen2Config)

padding_idx
vocab_size
embed_tokens
layers
norm
rotary_emb
gradient_checkpointing = False
has_sliding_layers
sin_embedding_input_qdq
cos_embedding_input_qdq
norm_input_qdq
convert_rope_for_deploy()
forward(input_ids=None, attention_mask=None, position_ids=None, past_key_values=None, inputs_embeds=None, use_cache=None, cache_position=None, **kwargs)
Parameters:
  • input_ids (Optional[torch.LongTensor])

  • attention_mask (Optional[torch.Tensor])

  • position_ids (Optional[torch.LongTensor])

  • past_key_values (Optional[transformers.cache_utils.Cache])

  • inputs_embeds (Optional[torch.FloatTensor])

  • use_cache (Optional[bool])

  • cache_position (Optional[torch.LongTensor])

  • kwargs (transformers.processing_utils.Unpack[transformers.utils.TransformersKwargs])

Return type:

transformers.modeling_outputs.BaseModelOutputWithPast
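
A minimal usage sketch for the backbone's forward pass, assuming Qwen2Model can be constructed from a plain transformers Qwen2Config exactly like its transformers counterpart; the tiny configuration values and the import path are assumptions for illustration. The call returns a BaseModelOutputWithPast whose last_hidden_state holds the per-token hidden states:

```python
# Hedged sketch: construct the backbone from a small Qwen2Config and run a forward pass.
import torch
from transformers import Qwen2Config

from modeling_qwen2 import Qwen2Model  # import path is an assumption

config = Qwen2Config(
    vocab_size=1000, hidden_size=64, intermediate_size=128,
    num_hidden_layers=2, num_attention_heads=4, num_key_value_heads=2,
)
model = Qwen2Model(config).eval()

input_ids = torch.randint(0, config.vocab_size, (1, 8))
attention_mask = torch.ones_like(input_ids)

with torch.no_grad():
    outputs = model(input_ids=input_ids, attention_mask=attention_mask, use_cache=True)

print(outputs.last_hidden_state.shape)  # (1, 8, hidden_size) == (1, 8, 64)
print(type(outputs.past_key_values))    # KV cache, returned because use_cache=True
```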

class modeling_qwen2.Qwen2ForCausalLM(config)

Bases: Qwen2PreTrainedModel, transformers.generation.GenerationMixin

config
model
vocab_size
lm_head
mllm_qualcomm_max_length = None
lm_head_input_qdq
lm_head_output_qdq
copy_lm_head_weight_from_embed_tokens()
forward(input_ids=None, attention_mask=None, position_ids=None, past_key_values=None, inputs_embeds=None, labels=None, use_cache=None, cache_position=None, logits_to_keep=0, **kwargs)

Example:

```python
>>> from transformers import AutoTokenizer, Qwen2ForCausalLM

>>> model = Qwen2ForCausalLM.from_pretrained("Qwen/Qwen2-7B-Instruct")
>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct")
>>> prompt = "Hey, are you conscious? Can you talk to me?"
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> # Generate
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
```
Parameters:
  • input_ids (Optional[torch.LongTensor])

  • attention_mask (Optional[torch.Tensor])

  • position_ids (Optional[torch.LongTensor])

  • past_key_values (Optional[transformers.cache_utils.Cache])

  • inputs_embeds (Optional[torch.FloatTensor])

  • labels (Optional[torch.LongTensor])

  • use_cache (Optional[bool])

  • cache_position (Optional[torch.LongTensor])

  • logits_to_keep (Union[int, torch.Tensor])

  • kwargs (transformers.processing_utils.Unpack[transformers.utils.TransformersKwargs])

Return type:

transformers.modeling_outputs.CausalLMOutputWithPast
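
Beyond the generation example above, the forward signature supports a supervised path and a logits-truncation path. The sketch below illustrates both, assuming this class follows the standard transformers contract: passing labels yields a causal-LM loss in the returned CausalLMOutputWithPast, and logits_to_keep=N computes logits only for the last N positions. The tiny config and the import path are assumptions for illustration:

```python
# Hedged sketch of the forward contract (standard transformers behaviour assumed).
import torch
from transformers import Qwen2Config

from modeling_qwen2 import Qwen2ForCausalLM  # import path is an assumption

config = Qwen2Config(
    vocab_size=1000, hidden_size=64, intermediate_size=128,
    num_hidden_layers=2, num_attention_heads=4, num_key_value_heads=2,
)
model = Qwen2ForCausalLM(config)

input_ids = torch.randint(0, config.vocab_size, (2, 16))

# Supervised path: labels are shifted internally, so input_ids can double as labels.
outputs = model(input_ids=input_ids, labels=input_ids)
print(outputs.loss)          # scalar language-modeling loss
print(outputs.logits.shape)  # (2, 16, vocab_size)

# Decode-time path: keep only the last position's logits.
last_only = model(input_ids=input_ids, logits_to_keep=1)
print(last_only.logits.shape)  # (2, 1, vocab_size)
```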

class modeling_qwen2.Qwen2ForSequenceClassification

Bases: transformers.modeling_layers.GenericForSequenceClassification, Qwen2PreTrainedModel
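
Qwen2ForSequenceClassification composes the generic transformers sequence-classification head over the Qwen2 backbone. Assuming it behaves like its transformers counterpart, the head pools the hidden state of the last non-padding token and projects it to num_labels logits; the sketch below is hypothetical (config values, pad_token_id, and import path are assumptions):

```python
# Hedged usage sketch; standard decoder-only classification behaviour assumed.
import torch
from transformers import Qwen2Config

from modeling_qwen2 import Qwen2ForSequenceClassification  # import path is an assumption

config = Qwen2Config(
    vocab_size=1000, hidden_size=64, intermediate_size=128,
    num_hidden_layers=2, num_attention_heads=4, num_key_value_heads=2,
    num_labels=3, pad_token_id=0,  # pad_token_id is required for batched inputs
)
model = Qwen2ForSequenceClassification(config)

input_ids = torch.randint(1, config.vocab_size, (2, 12))  # no padding tokens here
labels = torch.tensor([0, 2])

outputs = model(input_ids=input_ids, labels=labels)
print(outputs.logits.shape)  # (2, num_labels)
print(outputs.loss)          # cross-entropy classification loss
```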

class modeling_qwen2.Qwen2ForTokenClassification

Bases: transformers.modeling_layers.GenericForTokenClassification, Qwen2PreTrainedModel

class modeling_qwen2.Qwen2ForQuestionAnswering

Bases: transformers.modeling_layers.GenericForQuestionAnswering, Qwen2PreTrainedModel

base_model_prefix = 'transformer'
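
The question-answering variant wraps the backbone under the 'transformer' prefix and, assuming the generic transformers extractive-QA head, predicts start and end logits over the input sequence. A hypothetical sketch (config values and import path are assumptions):

```python
# Hedged usage sketch; standard extractive-QA head behaviour assumed.
import torch
from transformers import Qwen2Config

from modeling_qwen2 import Qwen2ForQuestionAnswering  # import path is an assumption

config = Qwen2Config(
    vocab_size=1000, hidden_size=64, intermediate_size=128,
    num_hidden_layers=2, num_attention_heads=4, num_key_value_heads=2,
)
model = Qwen2ForQuestionAnswering(config)

input_ids = torch.randint(0, config.vocab_size, (1, 20))
outputs = model(input_ids=input_ids)

print(outputs.start_logits.shape)  # (1, 20): span-start scores per position
print(outputs.end_logits.shape)    # (1, 20): span-end scores per position
```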