salamandraTA-7B-academic

Language Model · llama architecture · 7.0B params · by BSC-LT
License: OTHER · 25 downloads · Early-stage
Edge AI: Mobile, Laptop, Server (16GB+ RAM)
Quick Summary

This repository contains the model SalamandraTA-7B-academic, a machine-translation fine-tuning of Salamandra7B-Instruct.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 7GB+ RAM
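To see where these tiers come from, a rough back-of-the-envelope calculation helps: the weights of a 7.0B-parameter model take roughly (parameter count × bytes per parameter) of memory, before accounting for activations and the KV cache. A minimal sketch (the precision-to-tier mapping is an illustration, not an official spec):

```python
# Approximate RAM needed to hold just the weights of a 7.0B-parameter
# model at common precisions. Activations and KV cache add more on top.
PARAMS = 7.0e9

def weight_gb(bytes_per_param: float) -> float:
    """Weight memory in GiB for a given storage precision."""
    return PARAMS * bytes_per_param / 1024**3

for precision, nbytes in [("fp32", 4), ("bf16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{precision}: ~{weight_gb(nbytes):.1f} GB")
```

At bf16 (as in the example below) the weights alone are ~13 GB, which is why the laptop tier calls for 16GB RAM; 4-bit quantization brings weights down to ~3.3 GB, in line with the mobile tier.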

Code Examples

Machine translation with the transformers library (Python):
from datetime import datetime
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "LangTech-MT/salamandraTA-7B-academic"

# Input parameters
source = 'English'
target = 'Spanish'
sentence = "With the purpose of analyzing women’s perceptions and classifying their modes of understanding a positive human papillomavirus (HPV+) test, we conducted 38 in‑depth interviews with women who had received an HPV diagnosis (normal and abnormal Pap smear), screened in Jujuy’s public health system in 2016. A typology based on women’s understandings of the result was developed: 1) understanding; 2) lack of understanding; a) underestimation; b) overestimation; c) confusion. The interviewees who experienced confusion over the results reported contradictory perceptions in relation to a positive HPV test and its severity; those who underestimated it tended to mention the absence of symptoms and expressed little concern over the result; while those who overestimated it considered themselves sick and described concern, narrating a biographical disruption and physical pain. These findings confirm the need to improve the delivery of results and the provision of information in order to decrease psychosocial impact and increase follow‑up adherence in HPV‑positive women."
 
text = f"Translate the following text from {source} into {target}.\n{source}: {sentence} \n{target}:"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Construct prompt using chat template
message = [ { "role": "user", "content": text } ]
date_string = datetime.today().strftime('%Y-%m-%d')

prompt = tokenizer.apply_chat_template(
    message,
    tokenize=False,
    add_generation_prompt=True,
    date_string=date_string
)

inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
input_length = inputs.shape[1]

# Generate output
outputs = model.generate(
    input_ids=inputs.to(model.device), 
    max_new_tokens=400,
    early_stopping=True,
    num_beams=5
)

# Decode and print output
print(tokenizer.decode(outputs[0, input_length:], skip_special_tokens=True))
# Con el propósito de analizar las percepciones de las mujeres y clasificar sus modos de comprensión de un resultado positivo de virus del papiloma humano (VPH+), en 2016 realizamos 38 entrevistas en profundidad a mujeres con diagnóstico de VPH (citología normal y anormal) detectado en el sistema público de salud de Jujuy. Se elaboró una tipología basada en la comprensión del resultado por parte de las mujeres: 1) comprensión; 2) falta de comprensión; a) subestimación; b) sobreestimación; c) confusión. Las entrevistadas que experimentaron confusión informaron percepciones contradictorias sobre el VPH+ y su gravedad; quienes lo subestimaron tendían a mencionar la ausencia de síntomas y mostraron poca preocupación; mientras que aquellas que lo sobreestimaron se consideraban enfermas, describían preocupación, narrando una ruptura biográfica y dolor físico. Estos hallazgos confirman la necesidad de mejorar la entrega de resultados y la provisión de información para disminuir el impacto psicosocial y aumentar la adherencia al seguimiento en mujeres con VPH positivo.
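The prompt format above can be factored into a small helper so that the same template is reused for every source/target pair. This is a convenience sketch; the function name `build_translation_prompt` is hypothetical and not part of the model's API:

```python
def build_translation_prompt(source: str, target: str, sentence: str) -> str:
    """Build the plain-text translation prompt shown in the example above:

    'Translate the following text from {source} into {target}.\\n{source}: {sentence} \\n{target}:'
    """
    return (
        f"Translate the following text from {source} into {target}.\n"
        f"{source}: {sentence} \n{target}:"
    )

# Example usage: the result is passed to apply_chat_template as the user message.
prompt = build_translation_prompt("English", "Spanish", "Hello, world.")
print(prompt)
```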
