Browse (showing 16-30 of 131)
- colqwen2-v1.0
  Unsigned, Latest ModelKit. Datasets: 1, Codebases: 1.
- qwen2-0.5b
  Unsigned, Latest ModelKit. Codebases: 2.
- javis-sms-detection
  Unsigned, Latest ModelKit. Datasets: 1, Codebases: 1.
- sms-spam-javis
  Unsigned, Latest ModelKit. Datasets: 1, Codebases: 1.
- omniparser-v2.0
  Unsigned, Latest ModelKit. Datasets: 2, Codebases: 1.
- bert-tiny-finetuned-sms-spam-detection
  Unsigned, Latest ModelKit. Datasets: 1, Codebases: 1.
- testing
  Unsigned, Latest ModelKit. Datasets: 1.
- gpt2-distilled-lora-alpaca
  Unsigned, Latest ModelKit. Codebases: 1.
- microsoft-phi-2
  Phi-2 is a Transformer with 2.7 billion parameters. It was trained on the same data sources as Phi-1.5, augmented with a new data source consisting of various NLP synthetic texts and filtered websites (selected for safety and educational value). On benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showed nearly state-of-the-art performance among models with fewer than 13 billion parameters.
  Unsigned, Latest ModelKit.
- llama3-githubactions
  Unsigned, Latest ModelKit. Datasets: 3.
- microsoft_phi-2
  Same description as microsoft-phi-2 above.
  Unsigned, Latest ModelKit.
- microsoft_phi-4
  phi-4 is a state-of-the-art open model built on a blend of synthetic datasets, data from filtered public-domain websites, and acquired academic books and Q&A datasets. This approach aimed to ensure that small, capable models were trained on data focused on high quality and advanced reasoning. phi-4 underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
  Unsigned, Latest ModelKit.
- fraud-detection-model
  Fraud detection model using sklearn.
  Unsigned, Latest ModelKit.
- testrepo
  Lorem ipsum is dummy or placeholder text commonly used in graphic design, publishing, and web development to fill empty spaces in a layout that does not yet have content.
  Unsigned, Latest ModelKit.
- quick-start
  Unsigned, Latest ModelKit.