Catalog

241 - 252 of 299

aopq1-22: Work tool (Unsigned)

test: test description (Unsigned)

kit-ray: Integrate Ray with KitOps (Unsigned)

quick-start (Unsigned)

testrepo: Lorem ipsum is a dummy or placeholder text commonly used in graphic design, publishing, and web development to fill empty spaces in a layout that does not yet have content. (Unsigned)

fraud-detection-model: Fraud detection model using sklearn (Unsigned)

microsoft_phi-4: phi-4 is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning. phi-4 underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. (Unsigned)

microsoft_phi-2: Phi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased nearly state-of-the-art performance among models with fewer than 13 billion parameters. (Unsigned)

microsoft-phi-2: Phi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased nearly state-of-the-art performance among models with fewer than 13 billion parameters. (Unsigned)

dvctokitops: Migrating DVC files to KitOps (Unsigned)

ayaan (Unsigned)

jozutest: adfsdfgserfg (Unsigned)