Catalog

Showing ModelKits 205–216 of 317

qwen2-0.5b
Unsigned · 303 downloads · 1 tag · latest ModelKit: 2 codebases

javis-sms-detection
Unsigned · 190 downloads · 1 tag · latest ModelKit: 1 dataset, 1 codebase

sms-spam-javis
Unsigned · 164 downloads · 1 tag · latest ModelKit: 1 dataset, 1 codebase

omniparser-v2.0
Unsigned · 121 downloads · 1 tag · latest ModelKit: 2 datasets, 1 codebase

bert-tiny-finetuned-sms-spam-detection
Unsigned · 240 downloads · 1 tag · latest ModelKit: 1 dataset, 1 codebase

testing
Unsigned · 71 downloads · 3 tags

gpt2-distilled-lora-alpaca
Unsigned · 77 downloads · 1 tag · latest ModelKit: 1 codebase

microsoft-phi-2
Unsigned · 0 downloads · 0 tags
Phi-2 is a Transformer with 2.7 billion parameters. It was trained on the same data sources as Phi-1.5, augmented with a new source consisting of various synthetic NLP texts and websites filtered for safety and educational value. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showed nearly state-of-the-art performance among models with fewer than 13 billion parameters.

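This entry appears to package the Phi-2 weights that Microsoft publishes on the Hugging Face Hub. Below is a minimal sketch of running the model with the transformers library, assuming the upstream "microsoft/phi-2" hub id rather than this ModelKit's own registry path; swap in a local directory if you unpack the ModelKit instead.

```python
# Minimal sketch: load and prompt Phi-2 via Hugging Face transformers.
# The "microsoft/phi-2" id is the upstream model this catalog entry
# appears to package; it is an assumption, not this registry's path.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")

prompt = "Explain why the sky is blue in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
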
llama3-githubactions
Unsigned · 113 downloads · 1 tag · latest ModelKit: 3 datasets

microsoft_phi-2
Unsigned · 0 downloads · 0 tags
(Same description as microsoft-phi-2 above.)

microsoft_phi-4
Unsigned · 0 downloads · 0 tags
phi-4 is a state-of-the-art open model built on a blend of synthetic datasets, data from filtered public-domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small, capable models were trained on data focused on high quality and advanced reasoning. phi-4 underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.

fraud-detection-model
Unsigned · 0 downloads · 0 tags
A fraud detection model built with scikit-learn.

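The description names scikit-learn but says nothing about the features, estimator, or training data, so the following is only a sketch of what a model of this kind might look like. The synthetic data, the 99:1 class imbalance, and the choice of RandomForestClassifier are illustrative assumptions, not the packaged model's actual code.

```python
# Sketch of a scikit-learn fraud-detection workflow on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for transaction data: ~1% positive (fraud) class.
X, y = make_classification(
    n_samples=10_000, n_features=10, weights=[0.99], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# class_weight="balanced" compensates for the rarity of fraud labels.
clf = RandomForestClassifier(class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```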