# bge-small-en-v1-5-ft-test-run

by magnifi

Embedding Model · New · 1 download
## Quick Summary

A fine-tuned version of bge-small-en-v1.5 (test run) for generating sentence embeddings.
## Device Compatibility

- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
## Code Examples

### Installation

```bash
pip install -U sentence-transformers
```
### Usage

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")

# Run inference
sentences = [
    'Market news from [DATES]',
    '[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
    '[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
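The `model.similarity` call above typically reduces to cosine similarity between the embedding vectors (the sentence-transformers default). As a minimal sketch of that computation, here is the same operation done directly with NumPy; the vectors below are made-up stand-ins for `model.encode` output, not real embeddings from this model:

```python
import numpy as np

# Hypothetical embedding vectors standing in for model.encode() output:
# row 0 is a "query", rows 1-2 are candidate function-call templates.
embeddings = np.array([
    [0.1, 0.9, 0.0],
    [0.1, 0.8, 0.1],
    [0.9, 0.0, 0.1],
])

# Cosine similarity: L2-normalize each row, then take pairwise dot products.
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
normalized = embeddings / norms
similarities = normalized @ normalized.T

print(similarities.shape)  # (3, 3)

# Route the query to the most similar candidate (skip self-similarity at index 0).
best_match = int(np.argmax(similarities[0, 1:])) + 1
print(best_match)  # 1
```

This is the pattern the usage example suggests for this model: embed a user query and a set of function-call strings, then pick the candidate with the highest cosine score.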
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Market news from [DATES]',
'[{"get_news_articles(None,None,None,\'<DATES>\')": "news_data"}, {"get_attribute([\'SPY\'],[\'returns\'],\'<DATES>\')":"SPY_returns"}, {"get_attribute([\'DIA\'],[\'returns\'],\'<DATES>\')":"DIA_returns"}, {"get_attribute([\'QQQ\'],[\'returns\'],\'<DATES>\')":"QQQ_returns"}]',
'[{"get_dividend_history([\'<TICKER>\'],None)": "<TICKER>_dividend_history"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]Usagepython