sghosts

32 models

qwen0.6-bsc-pdf-filtered-model (95 downloads, 0 likes)

qwen0.6-bsc-mixture-model (85 downloads, 0 likes)

qwen0.6-bsc-culturax-only-model (83 downloads, 0 likes)

CosmosGemma-9b_bsc (43 downloads, 0 likes)

CosmosGemma-9b-bsc-filtered (37 downloads, 0 likes)

Qwen3-0.6_bsc_v2 (35 downloads, 0 likes)

gemma3-1b-10k_data_uhem_pipeline (33 downloads, 0 likes)

CosmosGemma-9b-bsc-culturax-only (32 downloads, 0 likes)

Qwen3-0.6B_textholder_bsc (31 downloads, 0 likes)

CosmosGemma 9b Bsc Mixture (26 downloads, 1 like)

Qwen3-0.6B-10k_data_uhem_pipeline (24 downloads, 0 likes)

CosmosGemma-9b_bsc_mixture (23 downloads, 0 likes)

This model is a fine-tuned version of an unspecified base model, trained using TRL. Framework versions: PEFT 0.17.1, TRL 0.23.1, Transformers 4.57.0, PyTorch 2.8.0, Datasets 4.1.1, Tokenizers 0.22.1.

gemma3-1b-10k_bsc (23 downloads, 0 likes)

turkish-gpt2-medium-finetuned-1gb-cX-corpus (license: mit, 19 downloads, 0 likes)

This model is a fine-tuned version of ytu-ce-cosmos/turkish-gpt2-medium on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.5434

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.7299 | 0.0573 | 200 | 2.6433 |
| 2.6926 | 0.1147 | 400 | 2.6090 |
| 2.6536 | 0.1720 | 600 | 2.5891 |
| 2.6492 | 0.2293 | 800 | 2.5791 |
| 2.627 | 0.2866 | 1000 | 2.5694 |
| 2.6162 | 0.3440 | 1200 | 2.5619 |
| 2.6192 | 0.4013 | 1400 | 2.5568 |
| 2.6585 | 0.4586 | 1600 | 2.5528 |
| 2.6232 | 0.5159 | 1800 | 2.5499 |
| 2.6252 | 0.5733 | 2000 | 2.5472 |
| 2.6295 | 0.6306 | 2200 | 2.5454 |
| 2.5898 | 0.6879 | 2400 | 2.5444 |
| 2.6553 | 0.7453 | 2600 | 2.5439 |
| 2.611 | 0.8026 | 2800 | 2.5435 |
| 2.6149 | 0.8599 | 3000 | 2.5435 |
| 2.5969 | 0.9172 | 3200 | 2.5434 |
| 2.6132 | 0.9746 | 3400 | 2.5434 |

Framework versions: Transformers 4.50.0, PyTorch 2.5.1+cu124, Datasets 3.6.0, Tokenizers 0.21.1.
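The card above reports a per-device train batch size of 4, gradient accumulation of 16 steps, and 4 GPUs, which multiply to the stated total train batch size of 256. A minimal sketch of that arithmetic (variable names are illustrative, mirroring the card's hyperparameter names):

```python
# Effective (total) train batch size, as reported in the card:
# per-device batch size x gradient accumulation steps x number of devices.
per_device_train_batch_size = 4
gradient_accumulation_steps = 16
num_devices = 4

total_train_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_devices
)
print(total_train_batch_size)  # 256

# Eval side: per-device eval batch size 8 across 4 devices, no accumulation.
per_device_eval_batch_size = 8
total_eval_batch_size = per_device_eval_batch_size * num_devices
print(total_eval_batch_size)  # 32
```

This is why the optimizer takes one step only every 16 forward/backward passes per device: gradients are accumulated until the effective batch of 256 examples has been seen.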

gemma3-1b-10k_data_uhem_pipeline_onlytr (19 downloads, 0 likes)

bilsem-qwen3vlit-8b-gemmini3pro-101 (license: apache-2.0, 18 downloads, 0 likes)

Qwen3-0.6B-10k_bsc (18 downloads, 0 likes)

CosmosGemma-9b_bsc_culturax_only (17 downloads, 0 likes)

Model card for Turkish-Gemma-9b-v0.1culturaxonlyoutput: this model is a fine-tuned version of an unspecified base model, trained using TRL. Framework versions: PEFT 0.17.1, TRL 0.23.1, Transformers 4.57.0, PyTorch 2.8.0, Datasets 4.1.1, Tokenizers 0.22.1.

gemma3-1b-10k_data_uhem_pipeline_notables (17 downloads, 0 likes)

Qwen3-0.6B-10k_data_uhem_pipeline_notables (17 downloads, 0 likes)

Qwen3-0.6B-10k_data_uhem_pipeline_onlytr (15 downloads, 0 likes)

bilsem-qwen3vlit-8b-gemmini3pro-011 (license: apache-2.0, 13 downloads, 0 likes)

bilsem-qwen3vlit-8b-gemmini3pro-110 (license: apache-2.0, 13 downloads, 0 likes)

CosmosGemma-9b_bsc_filtered (13 downloads, 0 likes)

Model card for Turkish-Gemma-9b-v0.1textholderfilteredbscoutput10k1: this model is a fine-tuned version of an unspecified base model, trained using TRL. Framework versions: PEFT 0.17.1, TRL 0.23.1, Transformers 4.57.0, PyTorch 2.8.0, Datasets 4.1.1, Tokenizers 0.22.1.

Gemma3-1B_textholder_bsc (12 downloads, 0 likes)

pvqa_org_1_05 (9 downloads, 0 likes)

turkish-gpt2-medium-finetuned-pdfs (license: mit, 6 downloads, 0 likes)

This model is a fine-tuned version of ytu-ce-cosmos/turkish-gpt2-medium on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.6251

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8161 | 0.0285 | 50 | 2.8298 |
| 2.7675 | 0.0569 | 100 | 2.7596 |
| 2.6885 | 0.0854 | 150 | 2.7324 |
| 2.6692 | 0.1139 | 200 | 2.7224 |
| 2.6849 | 0.1423 | 250 | 2.7088 |
| 2.6689 | 0.1708 | 300 | 2.7013 |
| 2.6558 | 0.1993 | 350 | 2.6972 |
| 2.6076 | 0.2277 | 400 | 2.6840 |
| 2.5762 | 0.2562 | 450 | 2.6823 |
| 2.6125 | 0.2847 | 500 | 2.6756 |
| 2.5573 | 0.3131 | 550 | 2.6679 |
| 2.6253 | 0.3416 | 600 | 2.6617 |
| 2.5285 | 0.3701 | 650 | 2.6608 |
| 2.523 | 0.3985 | 700 | 2.6525 |
| 2.4611 | 0.4270 | 750 | 2.6500 |
| 2.5456 | 0.4555 | 800 | 2.6462 |
| 2.5815 | 0.4840 | 850 | 2.6421 |
| 2.4772 | 0.5124 | 900 | 2.6398 |
| 2.5755 | 0.5409 | 950 | 2.6356 |
| 2.5165 | 0.5694 | 1000 | 2.6335 |
| 2.5441 | 0.5978 | 1050 | 2.6321 |
| 2.5212 | 0.6263 | 1100 | 2.6301 |
| 2.57 | 0.6548 | 1150 | 2.6283 |
| 2.5052 | 0.6832 | 1200 | 2.6277 |
| 2.5508 | 0.7117 | 1250 | 2.6271 |
| 2.4813 | 0.7402 | 1300 | 2.6261 |
| 2.5459 | 0.7686 | 1350 | 2.6257 |
| 2.4531 | 0.7971 | 1400 | 2.6255 |
| 2.4906 | 0.8256 | 1450 | 2.6253 |
| 2.5867 | 0.8540 | 1500 | 2.6251 |
| 2.5177 | 0.8825 | 1550 | 2.6251 |
| 2.4529 | 0.9110 | 1600 | 2.6251 |
| 2.5726 | 0.9394 | 1650 | 2.6251 |
| 2.5035 | 0.9679 | 1700 | 2.6251 |
| 2.5258 | 0.9964 | 1750 | 2.6251 |

Framework versions: Transformers 4.55.2, PyTorch 2.6.0+cu124, Datasets 4.0.0, Tokenizers 0.21.4.
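The card above uses a cosine learning-rate schedule with a warmup ratio of 0.1: the learning rate ramps linearly from 0 to 0.0001 over the first 10% of steps, then follows a half-cosine decay back toward 0. A minimal sketch of that schedule shape (assuming the standard decay-to-zero variant; the step count of 1750 matches the card's final logged step):

```python
import math

def lr_at_step(step, total_steps, max_lr=1e-4, warmup_ratio=0.1):
    """Cosine schedule with linear warmup: linear ramp over the first
    warmup_ratio of steps, then a half-cosine decay down to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return max_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 1750
print(lr_at_step(0, total))     # 0.0 at the start of warmup
print(lr_at_step(175, total))   # peaks at max_lr when warmup ends
print(lr_at_step(1750, total))  # decays to ~0 at the final step
```

The flat tail of validation losses in the table (2.6251 from step 1500 onward) is consistent with this shape: near the end of the cosine the learning rate is close to zero, so the model barely moves.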

bilsem-qwen3vlit-8b-gemmini3pro-110-lora (2 downloads, 0 likes)

turkish-gpt2-medium-finetuned-corpus (license: mit, 2 downloads, 0 likes)

This model is a fine-tuned version of ytu-ce-cosmos/turkish-gpt2-medium on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.3479

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.536 | 0.0823 | 200 | 2.4685 |
| 2.4871 | 0.1646 | 400 | 2.4177 |
| 2.4659 | 0.2469 | 600 | 2.3919 |
| 2.4385 | 0.3292 | 800 | 2.3762 |
| 2.4381 | 0.4115 | 1000 | 2.3646 |
| 2.4167 | 0.4938 | 1200 | 2.3573 |
| 2.4163 | 0.5761 | 1400 | 2.3525 |
| 2.4225 | 0.6584 | 1600 | 2.3497 |
| 2.4252 | 0.7407 | 1800 | 2.3484 |
| 2.407 | 0.8230 | 2000 | 2.3480 |
| 2.3911 | 0.9053 | 2200 | 2.3480 |
| 2.4119 | 0.9876 | 2400 | 2.3479 |

Framework versions: Transformers 4.50.0, PyTorch 2.5.1+cu124, Datasets 3.6.0, Tokenizers 0.21.1.
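The evaluation losses reported across the three turkish-gpt2-medium fine-tunes (2.3479, 2.5434, and 2.6251) are mean per-token cross-entropy values. Assuming they are in nats, as is standard for these trainers, a common way to compare them is perplexity, exp(loss). A small sketch of that conversion:

```python
import math

# Reported eval cross-entropy losses (nats/token) from the cards above.
eval_losses = {
    "turkish-gpt2-medium-finetuned-corpus": 2.3479,
    "turkish-gpt2-medium-finetuned-1gb-cX-corpus": 2.5434,
    "turkish-gpt2-medium-finetuned-pdfs": 2.6251,
}

# Perplexity = exp(mean cross-entropy); lower means the model is
# less "surprised" by the evaluation text.
perplexities = {name: math.exp(loss) for name, loss in eval_losses.items()}
for name, ppl in sorted(perplexities.items(), key=lambda kv: kv[1]):
    print(f"{name}: perplexity = {ppl:.2f}")
```

Note that the three models were evaluated on different (unspecified) datasets, so these perplexities are not directly comparable across models; the conversion is still useful for reading each card's loss on a more interpretable scale.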

llama-3.2-1b-finetuned-corpus (llama, 2 downloads, 0 likes)

llama-3.2-1b-finetuned-1gb-cX-corpus (llama, 2 downloads, 0 likes)

gpt2-finetuned-turkish-filtered (license: mit, 1 download, 0 likes)