nephrology-gemma-300m-emb
by yasserrmd
Embedding Model · 300M params
New · 2 downloads
Early-stage
Edge AI: Mobile, Laptop, Server
Quick Summary
This is a SentenceTransformer (sentence-transformers) embedding model finetuned from google/embeddinggemma-300m.
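The model maps each text to a dense vector, and relevance is scored by cosine similarity between query and document vectors (the real embeddings are 768-dimensional, per the usage example below). A minimal numpy sketch of that scoring step, using toy 3-dimensional vectors as stand-ins for real model outputs:

```python
import numpy as np

def cosine_similarity(q: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Cosine similarity between each query row and each document row."""
    q_norm = q / np.linalg.norm(q, axis=1, keepdims=True)
    d_norm = d / np.linalg.norm(d, axis=1, keepdims=True)
    return q_norm @ d_norm.T

# Toy stand-ins for model.encode_query / model.encode_document outputs.
queries = np.array([[1.0, 0.0, 1.0]])
documents = np.array([[1.0, 0.0, 1.0],   # same direction as the query -> 1.0
                      [0.0, 1.0, 0.0]])  # orthogonal to the query -> 0.0

print(cosine_similarity(queries, documents))  # [[1. 0.]]
```

The shape convention matches the real model: one row per query, one column per document.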
Device Compatibility
Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
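The RAM figures above can be sanity-checked with back-of-envelope arithmetic: a ~300M-parameter model needs roughly parameters × bytes-per-parameter for its weights alone, before activations, tokenizer, and framework overhead. A rough sketch (the precision choices and the overhead caveat are assumptions, not published requirements for this model):

```python
# Rough weight-memory estimate for a ~300M-parameter embedding model.
# Excludes activations, tokenizer, and framework overhead.
PARAMS = 300_000_000

def weights_gb(bytes_per_param: int) -> float:
    """Raw weight size in GB at the given precision."""
    return PARAMS * bytes_per_param / 1e9

for precision, bpp in [("float32", 4), ("float16", 2), ("int8", 1)]:
    print(f"{precision}: ~{weights_gb(bpp):.1f} GB of weights")
# float32: ~1.2 GB, float16: ~0.6 GB, int8: ~0.3 GB
```

At half precision the weights fit comfortably in the 4-6GB mobile budget listed above.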
Training Data Analysis
🟡 Average (4.3/10)
A researched quality assessment of the training datasets used by nephrology-gemma-300m-emb.
Specialized For
general
science
multilingual
reasoning
Training Datasets (3)
Common Crawl
🔴 2.5/10
general
science
Key Strengths
- Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training d...
- Diversity: The dataset captures billions of web pages across multiple domains and content types, ena...
- Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web acro...
Considerations
- Biased Coverage: The crawling process prioritizes frequently linked domains, making content from dig...
- Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, violent c...
Wikipedia
🟡 5/10
science
multilingual
Key Strengths
- High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citatio...
- Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and ...
- Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to ...
Considerations
- Language Inequality: Low-resource language editions have significantly lower quality, fewer articles...
- Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and ...
arXiv
🟡 5.5/10
science
reasoning
Key Strengths
- Scientific Authority: Peer-reviewed content from an established repository
- Domain-Specific: Specialized vocabulary and concepts
- Mathematical Content: Includes complex equations and notation
Considerations
- Specialized: Primarily technical and mathematical content
- English-Heavy: Predominantly English-language papers
Code Examples

Usage (bash):
pip install -U sentence-transformers

Usage (python):
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yasserrmd/nephrology-gemma-300m-emb")
# Run inference
queries = [
"How do some participants believe that reimbursement or compensation for living kidney donors can help minimize disadvantage?",
]
documents = [
'Some participants believe that reimbursement or compensation can effectively help donors and recipients who are socioeconomically disadvantaged by removing financial barriers to donation. They advocate for government subsidies or special paid leave to support potential donors who may not be able to take leave or afford donation-related expenses. The goal is to ensure that financial constraints do not penalize individuals who are willing to donate.',
'The time in therapeutic range (TTR) of INR (International Normalized Ratio) is an important factor in determining the risk of hemorrhagic and ischemic events in hemodialysis patients. If the INR is below 1.5, there is an increased risk of hemorrhagic events, while an INR above 5 increases the risk of ischemic events. Maintaining the INR within the therapeutic range is challenging but crucial in minimizing these risks.',
'Urinary L-PGDS excretions have been found to be superior to other markers, including urinary excretions of type-IV collagen, beta-2 microglobulin, and NAG, as well as serum creatinine levels, in predicting renal injury in type-2 diabetes. Studies have shown that urinary L-PGDS excretions better predict ≥30 mg/gCr albuminuria in type-2 diabetes. The use of urinary L-PGDS excretions as a marker for renal injury in type-2 diabetes is supported by its ability to reflect a slight change in glomerular permeability and its positive correlation with albuminuria.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.6341, 0.0019, 0.0465]])

Deploy This Model
Production-ready deployment in minutes
Together.ai
Instant API access to this model
Production-ready inference API. Start free, scale to millions.
Replicate
One-click model deployment
Run models in the cloud with simple API. No DevOps required.

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.