Browse ModelKits: 61-75 of 180
- wine-quality
  Unsigned · Latest ModelKit
  Model 1 · Datasets 3 · Codebases 10 · Docs · Configuration

- deepseek-r1-distill-qwen-1.5b
  Unsigned · Latest ModelKit
  Model · Datasets 1 · Codebases 2 · Docs · Configuration

- wine-quality
  Unsigned · Latest ModelKit
  Model 2 · Datasets 1 · Codebases 1 · Docs · Configuration

- llama-3.1-8b-instruct
  Unsigned · Latest ModelKit
  Model · Datasets · Codebases 3 · Docs · Configuration

- colqwen2-v1.0
  Unsigned · Latest ModelKit
  Model · Datasets 1 · Codebases 1 · Docs · Configuration

- qwen2-0.5b
  Unsigned · Latest ModelKit
  Model · Datasets · Codebases 2 · Docs · Configuration

- javis-sms-detection
  Unsigned · Latest ModelKit
  Model · Datasets 1 · Codebases 1 · Docs · Configuration

- sms-spam-javis
  Unsigned · Latest ModelKit
  Model · Datasets 1 · Codebases 1 · Docs · Configuration

- omniparser-v2.0
  Unsigned · Latest ModelKit
  Model · Datasets 2 · Codebases 1 · Docs · Configuration

- bert-tiny-finetuned-sms-spam-detection
  Unsigned · Latest ModelKit
  Model · Datasets 1 · Codebases 1 · Docs · Configuration

- testing
  Unsigned · Latest ModelKit
  Model · Datasets 1 · Codebases · Docs · Configuration

- gpt2-distilled-lora-alpaca
  Unsigned · Latest ModelKit
  Model · Datasets · Codebases 1 · Docs · Configuration

- microsoft-phi-2
  Phi-2 is a Transformer with 2.7 billion parameters. It was trained on the same data sources as Phi-1.5, augmented with a new data source consisting of various synthetic NLP texts and filtered websites (selected for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showed nearly state-of-the-art performance among models with fewer than 13 billion parameters.
  Unsigned · Latest ModelKit
  Model · Datasets · Codebases · Docs · Configuration

- llama3-githubactions
  Unsigned · Latest ModelKit
  Model · Datasets 3 · Codebases · Docs · Configuration

- microsoft_phi-2
  Unsigned · Latest ModelKit
  Model · Datasets · Codebases · Docs · Configuration