HuggingFace login #168

Answered by codelion
SeriousJ55 asked this question in Q&A
Mar 3, 2025 · 3 comments · 6 replies

Yes, I can implement it. In the meantime, you can try the snippet below:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load model and tokenizer
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Define your input
input_text = "Hello, how are"
inputs = tokenizer(input_text, return_tensors="pt")

# Generate output with log probabilities
outputs = model.generate(
    **inputs,
    max_new_tokens=10,
    output_scores=True,
    return_dict_in_generate=True
)

# Retrieve generated tokens
generated_tokens = outputs.sequences[0]
generated_text = tokenizer.decode(generated_tokens)

# Calcula…
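
The answer preview is truncated at this point. A minimal sketch of the likely remaining step, assuming the goal is per-token log probabilities; it uses the transformers compute_transition_scores helper, which with normalize_logits=True applies a log-softmax to the raw generation scores:

# Compute per-token log probabilities for the generated tokens
# (normalize_logits=True converts raw scores into log probabilities)
transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, normalize_logits=True
)

# Pair each newly generated token with its log probability
input_length = inputs.input_ids.shape[1]
new_tokens = outputs.sequences[0, input_length:]
for token, score in zip(new_tokens, transition_scores[0]):
    print(f"{tokenizer.decode(token)!r}: log prob = {score.item():.4f}")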

Replies: 3 comments · 6 replies (between @codelion and @SeriousJ55; reply bodies not shown in this view)

Answer selected by codelion
Category: Q&A
Labels: none yet
2 participants
This discussion was converted from issue #167 on March 04, 2025 02:05.