Neurochain 2024-03-08 18:09:36 +02:00
parent d110db2595
commit ab4f0777cd
684 changed files with 5231175 additions and 0 deletions


@@ -0,0 +1,204 @@
---
library_name: peft
base_model: /home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
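In the absence of author-provided code, the following is a minimal loading sketch assembled from this commit's own files: the base path comes from the YAML front matter and the adapter directory from the checkpoint paths below. It is unverified against this checkpoint, and the heavyweight imports are deferred into the function so the sketch can be sanity-checked without the dependencies installed.

```python
def load_io_chatbot(
    base_path="/home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ",
    adapter_path="io-chatbot-v3/checkpoint-100",
):
    """Load the GPTQ base model and attach this LoRA adapter (sketch only)."""
    # Deferred imports: transformers/peft (and a GPTQ runtime such as
    # optimum/auto-gptq) are only needed when the function is called.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_path)
    base = AutoModelForCausalLM.from_pretrained(base_path, device_map="auto")
    model = PeftModel.from_pretrained(base, adapter_path)
    return tokenizer, model

# Usage (requires the model files plus the GPTQ runtime):
# tokenizer, model = load_io_chatbot()
# prompt = tokenizer.apply_chat_template(
#     [{"role": "user", "content": "Hello!"}], tokenize=False
# )
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```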
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2


@@ -0,0 +1,32 @@
{
"alpha_pattern": {},
"auto_mapping": null,
"base_model_name_or_path": "/home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ",
"bias": "none",
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"layers_pattern": null,
"layers_to_transform": null,
"loftq_config": {},
"lora_alpha": 16,
"lora_dropout": 0.05,
"megatron_config": null,
"megatron_core": "megatron.core",
"modules_to_save": null,
"peft_type": "LORA",
"r": 16,
"rank_pattern": {},
"revision": null,
"target_modules": [
"k_proj",
"o_proj",
"gate_proj",
"q_proj",
"v_proj",
"up_proj",
"down_proj"
],
"task_type": "CAUSAL_LM",
"use_rslora": false
}
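With "r": 16 and the seven target_modules listed above, the adapter's trainable-parameter count can be estimated: each adapted Linear(in, out) layer gains r*in (the A matrix) plus r*out (the B matrix) parameters. A back-of-the-envelope sketch, assuming standard Mistral-7B-Instruct-v0.2 dimensions (hidden size 4096, intermediate size 14336, 8 KV heads of dim 128, 32 layers), none of which are stated in this config:

```python
r = 16  # from "r": 16 in the config above

# Assumed per-layer projection shapes (fan_in, fan_out) for Mistral-7B.
shapes = {
    "q_proj": (4096, 4096),
    "k_proj": (4096, 1024),   # grouped-query attention: 8 KV heads * 128
    "v_proj": (4096, 1024),
    "o_proj": (4096, 4096),
    "gate_proj": (4096, 14336),
    "up_proj": (4096, 14336),
    "down_proj": (14336, 4096),
}

# LoRA adds r*(fan_in + fan_out) parameters per adapted projection.
per_layer = sum(r * (fan_in + fan_out) for fan_in, fan_out in shapes.values())
total = 32 * per_layer  # 32 decoder layers
print(per_layer, total)  # 1310720 41943040 -> roughly 42M trainable parameters
```

Note also that with lora_alpha equal to r, the adapter's effective scaling factor alpha/r is 1.0, a common default choice.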

BIN io-chatbot-v3/checkpoint-100/adapter_model.safetensors (Stored with Git LFS; binary not shown)

BIN io-chatbot-v3/checkpoint-100/optimizer.pt (Stored with Git LFS; binary not shown)


BIN io-chatbot-v3/checkpoint-100/scheduler.pt (Stored with Git LFS; binary not shown)


@@ -0,0 +1,24 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": "</s>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

File diff suppressed because it is too large.



@@ -0,0 +1,43 @@
{
"add_bos_token": false,
"add_eos_token": false,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [],
"bos_token": "<s>",
"chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"legacy": true,
"model_max_length": 1000000000000000019884624838656,
"pad_token": "</s>",
"sp_model_kwargs": {},
"spaces_between_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
"unk_token": "<unk>",
"use_default_system_prompt": false
}
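The chat_template entry above is a Jinja template. Its logic can be mirrored in plain Python to see the exact string the model receives; this is a sketch, with the bos/eos token values taken from this file:

```python
def render_chat(messages, bos_token="<s>", eos_token="</s>"):
    """Plain-Python mirror of the Jinja chat_template above."""
    out = bos_token
    for i, msg in enumerate(messages):
        # The template requires roles to alternate, starting with "user".
        if (msg["role"] == "user") != (i % 2 == 0):
            raise ValueError(
                "Conversation roles must alternate user/assistant/user/assistant/..."
            )
        if msg["role"] == "user":
            out += "[INST] " + msg["content"] + " [/INST]"
        elif msg["role"] == "assistant":
            out += msg["content"] + eos_token
        else:
            raise ValueError("Only user and assistant roles are supported!")
    return out

print(render_chat([
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
]))  # <s>[INST] Hi [/INST]Hello!</s>
```

Note there is no space between [/INST] and the assistant reply, and no newlines anywhere: the template concatenates turns directly, closing each assistant turn with the eos token.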


@@ -0,0 +1,321 @@
{
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 25.0,
"eval_steps": 5,
"global_step": 100,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 0.5,
"learning_rate": 2.9999177540482684e-05,
"loss": 2.6428,
"step": 2
},
{
"epoch": 1.0,
"learning_rate": 2.9996710252122685e-05,
"loss": 2.1838,
"step": 4
},
{
"epoch": 1.5,
"learning_rate": 2.9992598405485974e-05,
"loss": 1.8524,
"step": 6
},
{
"epoch": 2.0,
"learning_rate": 2.9986842451482876e-05,
"loss": 1.59,
"step": 8
},
{
"epoch": 2.5,
"learning_rate": 2.9979443021318607e-05,
"loss": 1.4015,
"step": 10
},
{
"epoch": 3.0,
"learning_rate": 2.9970400926424075e-05,
"loss": 1.2128,
"step": 12
},
{
"epoch": 3.5,
"learning_rate": 2.995971715836687e-05,
"loss": 1.1418,
"step": 14
},
{
"epoch": 4.0,
"learning_rate": 2.9947392888742566e-05,
"loss": 1.0389,
"step": 16
},
{
"epoch": 4.5,
"learning_rate": 2.9933429469046202e-05,
"loss": 1.0268,
"step": 18
},
{
"epoch": 5.0,
"learning_rate": 2.99178284305241e-05,
"loss": 0.9734,
"step": 20
},
{
"epoch": 5.5,
"learning_rate": 2.9900591484005944e-05,
"loss": 0.9755,
"step": 22
},
{
"epoch": 6.0,
"learning_rate": 2.988172051971717e-05,
"loss": 0.9142,
"step": 24
},
{
"epoch": 6.5,
"learning_rate": 2.9861217607071655e-05,
"loss": 0.8872,
"step": 26
},
{
"epoch": 7.0,
"learning_rate": 2.983908499444483e-05,
"loss": 0.8633,
"step": 28
},
{
"epoch": 7.5,
"learning_rate": 2.981532510892707e-05,
"loss": 0.8248,
"step": 30
},
{
"epoch": 8.0,
"learning_rate": 2.9789940556057574e-05,
"loss": 0.8352,
"step": 32
},
{
"epoch": 8.5,
"learning_rate": 2.9762934119538628e-05,
"loss": 0.8017,
"step": 34
},
{
"epoch": 9.0,
"learning_rate": 2.9734308760930333e-05,
"loss": 0.7705,
"step": 36
},
{
"epoch": 9.5,
"learning_rate": 2.9704067619325828e-05,
"loss": 0.7619,
"step": 38
},
{
"epoch": 10.0,
"learning_rate": 2.9672214011007087e-05,
"loss": 0.7593,
"step": 40
},
{
"epoch": 10.5,
"learning_rate": 2.9638751429081213e-05,
"loss": 0.7469,
"step": 42
},
{
"epoch": 11.0,
"learning_rate": 2.9603683543097406e-05,
"loss": 0.7477,
"step": 44
},
{
"epoch": 11.5,
"learning_rate": 2.9567014198644542e-05,
"loss": 0.716,
"step": 46
},
{
"epoch": 12.0,
"learning_rate": 2.9528747416929467e-05,
"loss": 0.7351,
"step": 48
},
{
"epoch": 12.5,
"learning_rate": 2.9488887394336025e-05,
"loss": 0.72,
"step": 50
},
{
"epoch": 13.0,
"learning_rate": 2.9447438501964873e-05,
"loss": 0.714,
"step": 52
},
{
"epoch": 13.5,
"learning_rate": 2.9404405285154146e-05,
"loss": 0.6994,
"step": 54
},
{
"epoch": 14.0,
"learning_rate": 2.9359792462981007e-05,
"loss": 0.7064,
"step": 56
},
{
"epoch": 14.5,
"learning_rate": 2.9313604927744153e-05,
"loss": 0.6807,
"step": 58
},
{
"epoch": 15.0,
"learning_rate": 2.9265847744427305e-05,
"loss": 0.6969,
"step": 60
},
{
"epoch": 15.5,
"learning_rate": 2.9216526150143788e-05,
"loss": 0.6836,
"step": 62
},
{
"epoch": 16.0,
"learning_rate": 2.9165645553562215e-05,
"loss": 0.6557,
"step": 64
},
{
"epoch": 16.5,
"learning_rate": 2.9113211534313385e-05,
"loss": 0.6619,
"step": 66
},
{
"epoch": 17.0,
"learning_rate": 2.9059229842378373e-05,
"loss": 0.6496,
"step": 68
},
{
"epoch": 17.5,
"learning_rate": 2.9003706397458025e-05,
"loss": 0.6268,
"step": 70
},
{
"epoch": 18.0,
"learning_rate": 2.894664728832377e-05,
"loss": 0.6586,
"step": 72
},
{
"epoch": 18.5,
"learning_rate": 2.8888058772149923e-05,
"loss": 0.6197,
"step": 74
},
{
"epoch": 19.0,
"learning_rate": 2.8827947273827508e-05,
"loss": 0.638,
"step": 76
},
{
"epoch": 19.5,
"learning_rate": 2.8766319385259717e-05,
"loss": 0.6093,
"step": 78
},
{
"epoch": 20.0,
"learning_rate": 2.8703181864639013e-05,
"loss": 0.6089,
"step": 80
},
{
"epoch": 20.5,
"learning_rate": 2.863854163570603e-05,
"loss": 0.6108,
"step": 82
},
{
"epoch": 21.0,
"learning_rate": 2.8572405786990293e-05,
"loss": 0.5776,
"step": 84
},
{
"epoch": 21.5,
"learning_rate": 2.8504781571032906e-05,
"loss": 0.5776,
"step": 86
},
{
"epoch": 22.0,
"learning_rate": 2.8435676403591193e-05,
"loss": 0.5997,
"step": 88
},
{
"epoch": 22.5,
"learning_rate": 2.8365097862825516e-05,
"loss": 0.5579,
"step": 90
},
{
"epoch": 23.0,
"learning_rate": 2.829305368846822e-05,
"loss": 0.579,
"step": 92
},
{
"epoch": 23.5,
"learning_rate": 2.821955178097488e-05,
"loss": 0.5689,
"step": 94
},
{
"epoch": 24.0,
"learning_rate": 2.8144600200657953e-05,
"loss": 0.5278,
"step": 96
},
{
"epoch": 24.5,
"learning_rate": 2.8068207166802843e-05,
"loss": 0.55,
"step": 98
},
{
"epoch": 25.0,
"learning_rate": 2.7990381056766583e-05,
"loss": 0.516,
"step": 100
}
],
"logging_steps": 2,
"max_steps": 600,
"num_input_tokens_seen": 0,
"num_train_epochs": 150,
"save_steps": 500,
"total_flos": 4764264824832000.0,
"train_batch_size": 52,
"trial_name": null,
"trial_params": null
}
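The logged learning_rate values are consistent with a warmup-free cosine decay over max_steps = 600 from a peak of 3e-5; the peak is inferred from the logged points, not stated in this file. A quick check:

```python
import math

def cosine_lr(step, peak_lr=3e-5, max_steps=600):
    # Cosine decay from peak_lr at step 0 down to 0 at max_steps, no warmup.
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * step / max_steps))

print(cosine_lr(2))    # ~2.99992e-05, the first logged value
print(cosine_lr(100))  # ~2.79904e-05, the value logged at global_step 100
```

The epoch/step ratio in the log (0.5 epoch every 2 steps) also implies 4 optimizer steps per epoch, matching global_step 100 at epoch 25.0.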



@@ -0,0 +1,204 @@
---
library_name: peft
base_model: /home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2


@@ -0,0 +1,32 @@
{
"alpha_pattern": {},
"auto_mapping": null,
"base_model_name_or_path": "/home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ",
"bias": "none",
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"layers_pattern": null,
"layers_to_transform": null,
"loftq_config": {},
"lora_alpha": 16,
"lora_dropout": 0.05,
"megatron_config": null,
"megatron_core": "megatron.core",
"modules_to_save": null,
"peft_type": "LORA",
"r": 16,
"rank_pattern": {},
"revision": null,
"target_modules": [
"k_proj",
"o_proj",
"gate_proj",
"q_proj",
"v_proj",
"up_proj",
"down_proj"
],
"task_type": "CAUSAL_LM",
"use_rslora": false
}

BIN io-chatbot-v3/checkpoint-104/adapter_model.safetensors (Stored with Git LFS; binary not shown)

BIN io-chatbot-v3/checkpoint-104/optimizer.pt (Stored with Git LFS; binary not shown)


BIN io-chatbot-v3/checkpoint-104/scheduler.pt (Stored with Git LFS; binary not shown)


@@ -0,0 +1,24 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": "</s>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

File diff suppressed because it is too large.



@@ -0,0 +1,43 @@
{
"add_bos_token": false,
"add_eos_token": false,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [],
"bos_token": "<s>",
"chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"legacy": true,
"model_max_length": 1000000000000000019884624838656,
"pad_token": "</s>",
"sp_model_kwargs": {},
"spaces_between_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
"unk_token": "<unk>",
"use_default_system_prompt": false
}


@@ -0,0 +1,333 @@
{
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 26.0,
"eval_steps": 5,
"global_step": 104,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 0.5,
"learning_rate": 2.9999177540482684e-05,
"loss": 2.6428,
"step": 2
},
{
"epoch": 1.0,
"learning_rate": 2.9996710252122685e-05,
"loss": 2.1838,
"step": 4
},
{
"epoch": 1.5,
"learning_rate": 2.9992598405485974e-05,
"loss": 1.8524,
"step": 6
},
{
"epoch": 2.0,
"learning_rate": 2.9986842451482876e-05,
"loss": 1.59,
"step": 8
},
{
"epoch": 2.5,
"learning_rate": 2.9979443021318607e-05,
"loss": 1.4015,
"step": 10
},
{
"epoch": 3.0,
"learning_rate": 2.9970400926424075e-05,
"loss": 1.2128,
"step": 12
},
{
"epoch": 3.5,
"learning_rate": 2.995971715836687e-05,
"loss": 1.1418,
"step": 14
},
{
"epoch": 4.0,
"learning_rate": 2.9947392888742566e-05,
"loss": 1.0389,
"step": 16
},
{
"epoch": 4.5,
"learning_rate": 2.9933429469046202e-05,
"loss": 1.0268,
"step": 18
},
{
"epoch": 5.0,
"learning_rate": 2.99178284305241e-05,
"loss": 0.9734,
"step": 20
},
{
"epoch": 5.5,
"learning_rate": 2.9900591484005944e-05,
"loss": 0.9755,
"step": 22
},
{
"epoch": 6.0,
"learning_rate": 2.988172051971717e-05,
"loss": 0.9142,
"step": 24
},
{
"epoch": 6.5,
"learning_rate": 2.9861217607071655e-05,
"loss": 0.8872,
"step": 26
},
{
"epoch": 7.0,
"learning_rate": 2.983908499444483e-05,
"loss": 0.8633,
"step": 28
},
{
"epoch": 7.5,
"learning_rate": 2.981532510892707e-05,
"loss": 0.8248,
"step": 30
},
{
"epoch": 8.0,
"learning_rate": 2.9789940556057574e-05,
"loss": 0.8352,
"step": 32
},
{
"epoch": 8.5,
"learning_rate": 2.9762934119538628e-05,
"loss": 0.8017,
"step": 34
},
{
"epoch": 9.0,
"learning_rate": 2.9734308760930333e-05,
"loss": 0.7705,
"step": 36
},
{
"epoch": 9.5,
"learning_rate": 2.9704067619325828e-05,
"loss": 0.7619,
"step": 38
},
{
"epoch": 10.0,
"learning_rate": 2.9672214011007087e-05,
"loss": 0.7593,
"step": 40
},
{
"epoch": 10.5,
"learning_rate": 2.9638751429081213e-05,
"loss": 0.7469,
"step": 42
},
{
"epoch": 11.0,
"learning_rate": 2.9603683543097406e-05,
"loss": 0.7477,
"step": 44
},
{
"epoch": 11.5,
"learning_rate": 2.9567014198644542e-05,
"loss": 0.716,
"step": 46
},
{
"epoch": 12.0,
"learning_rate": 2.9528747416929467e-05,
"loss": 0.7351,
"step": 48
},
{
"epoch": 12.5,
"learning_rate": 2.9488887394336025e-05,
"loss": 0.72,
"step": 50
},
{
"epoch": 13.0,
"learning_rate": 2.9447438501964873e-05,
"loss": 0.714,
"step": 52
},
{
"epoch": 13.5,
"learning_rate": 2.9404405285154146e-05,
"loss": 0.6994,
"step": 54
},
{
"epoch": 14.0,
"learning_rate": 2.9359792462981007e-05,
"loss": 0.7064,
"step": 56
},
{
"epoch": 14.5,
"learning_rate": 2.9313604927744153e-05,
"loss": 0.6807,
"step": 58
},
{
"epoch": 15.0,
"learning_rate": 2.9265847744427305e-05,
"loss": 0.6969,
"step": 60
},
{
"epoch": 15.5,
"learning_rate": 2.9216526150143788e-05,
"loss": 0.6836,
"step": 62
},
{
"epoch": 16.0,
"learning_rate": 2.9165645553562215e-05,
"loss": 0.6557,
"step": 64
},
{
"epoch": 16.5,
"learning_rate": 2.9113211534313385e-05,
"loss": 0.6619,
"step": 66
},
{
"epoch": 17.0,
"learning_rate": 2.9059229842378373e-05,
"loss": 0.6496,
"step": 68
},
{
"epoch": 17.5,
"learning_rate": 2.9003706397458025e-05,
"loss": 0.6268,
"step": 70
},
{
"epoch": 18.0,
"learning_rate": 2.894664728832377e-05,
"loss": 0.6586,
"step": 72
},
{
"epoch": 18.5,
"learning_rate": 2.8888058772149923e-05,
"loss": 0.6197,
"step": 74
},
{
"epoch": 19.0,
"learning_rate": 2.8827947273827508e-05,
"loss": 0.638,
"step": 76
},
{
"epoch": 19.5,
"learning_rate": 2.8766319385259717e-05,
"loss": 0.6093,
"step": 78
},
{
"epoch": 20.0,
"learning_rate": 2.8703181864639013e-05,
"loss": 0.6089,
"step": 80
},
{
"epoch": 20.5,
"learning_rate": 2.863854163570603e-05,
"loss": 0.6108,
"step": 82
},
{
"epoch": 21.0,
"learning_rate": 2.8572405786990293e-05,
"loss": 0.5776,
"step": 84
},
{
"epoch": 21.5,
"learning_rate": 2.8504781571032906e-05,
"loss": 0.5776,
"step": 86
},
{
"epoch": 22.0,
"learning_rate": 2.8435676403591193e-05,
"loss": 0.5997,
"step": 88
},
{
"epoch": 22.5,
"learning_rate": 2.8365097862825516e-05,
"loss": 0.5579,
"step": 90
},
{
"epoch": 23.0,
"learning_rate": 2.829305368846822e-05,
"loss": 0.579,
"step": 92
},
{
"epoch": 23.5,
"learning_rate": 2.821955178097488e-05,
"loss": 0.5689,
"step": 94
},
{
"epoch": 24.0,
"learning_rate": 2.8144600200657953e-05,
"loss": 0.5278,
"step": 96
},
{
"epoch": 24.5,
"learning_rate": 2.8068207166802843e-05,
"loss": 0.55,
"step": 98
},
{
"epoch": 25.0,
"learning_rate": 2.7990381056766583e-05,
"loss": 0.516,
"step": 100
},
{
"epoch": 25.5,
"learning_rate": 2.7911130405059155e-05,
"loss": 0.5342,
"step": 102
},
{
"epoch": 26.0,
"learning_rate": 2.78304639024076e-05,
"loss": 0.5021,
"step": 104
}
],
"logging_steps": 2,
"max_steps": 600,
"num_input_tokens_seen": 0,
"num_train_epochs": 150,
"save_steps": 500,
"total_flos": 4954835417825280.0,
"train_batch_size": 52,
"trial_name": null,
"trial_params": null
}



@@ -0,0 +1,204 @@
---
library_name: peft
base_model: /home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2


@@ -0,0 +1,32 @@
{
"alpha_pattern": {},
"auto_mapping": null,
"base_model_name_or_path": "/home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ",
"bias": "none",
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"layers_pattern": null,
"layers_to_transform": null,
"loftq_config": {},
"lora_alpha": 16,
"lora_dropout": 0.05,
"megatron_config": null,
"megatron_core": "megatron.core",
"modules_to_save": null,
"peft_type": "LORA",
"r": 16,
"rank_pattern": {},
"revision": null,
"target_modules": [
"k_proj",
"o_proj",
"gate_proj",
"q_proj",
"v_proj",
"up_proj",
"down_proj"
],
"task_type": "CAUSAL_LM",
"use_rslora": false
}

BIN io-chatbot-v3/checkpoint-108/adapter_model.safetensors (Stored with Git LFS; binary not shown)

BIN io-chatbot-v3/checkpoint-108/optimizer.pt (Stored with Git LFS; binary not shown)


BIN io-chatbot-v3/checkpoint-108/scheduler.pt (Stored with Git LFS; binary not shown)


@@ -0,0 +1,24 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": "</s>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

File diff suppressed because it is too large

Binary file not shown.

View File

@@ -0,0 +1,43 @@
{
"add_bos_token": false,
"add_eos_token": false,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [],
"bos_token": "<s>",
"chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"legacy": true,
"model_max_length": 1000000000000000019884624838656,
"pad_token": "</s>",
"sp_model_kwargs": {},
"spaces_between_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
"unk_token": "<unk>",
"use_default_system_prompt": false
}
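
The `chat_template` above is a Jinja expression; mirroring it in plain Python makes the prompt format easier to see. A sketch assuming the `<s>`/`</s>` tokens from this config (the `apply_mistral_chat_template` name is illustrative, not a library function):

```python
BOS, EOS = "<s>", "</s>"  # bos_token / eos_token from the tokenizer config above

def apply_mistral_chat_template(messages):
    """Plain-Python mirror of the Jinja chat_template above:
    roles must strictly alternate, starting with a user turn."""
    out = BOS
    for i, msg in enumerate(messages):
        # Mirrors: (message['role'] == 'user') != (loop.index0 % 2 == 0)
        if (msg["role"] == "user") != (i % 2 == 0):
            raise ValueError("Conversation roles must alternate user/assistant/...")
        if msg["role"] == "user":
            out += "[INST] " + msg["content"] + " [/INST]"
        elif msg["role"] == "assistant":
            out += msg["content"] + EOS
        else:
            raise ValueError("Only user and assistant roles are supported!")
    return out

prompt = apply_mistral_chat_template([
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there"},
])
print(prompt)  # <s>[INST] Hello [/INST]Hi there</s>
```

Note that assistant turns are closed with `</s>` but user turns are not, so a conversation ending on a user turn leaves the prompt open for generation.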

View File

@@ -0,0 +1,345 @@
{
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 27.0,
"eval_steps": 5,
"global_step": 108,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 0.5,
"learning_rate": 2.9999177540482684e-05,
"loss": 2.6428,
"step": 2
},
{
"epoch": 1.0,
"learning_rate": 2.9996710252122685e-05,
"loss": 2.1838,
"step": 4
},
{
"epoch": 1.5,
"learning_rate": 2.9992598405485974e-05,
"loss": 1.8524,
"step": 6
},
{
"epoch": 2.0,
"learning_rate": 2.9986842451482876e-05,
"loss": 1.59,
"step": 8
},
{
"epoch": 2.5,
"learning_rate": 2.9979443021318607e-05,
"loss": 1.4015,
"step": 10
},
{
"epoch": 3.0,
"learning_rate": 2.9970400926424075e-05,
"loss": 1.2128,
"step": 12
},
{
"epoch": 3.5,
"learning_rate": 2.995971715836687e-05,
"loss": 1.1418,
"step": 14
},
{
"epoch": 4.0,
"learning_rate": 2.9947392888742566e-05,
"loss": 1.0389,
"step": 16
},
{
"epoch": 4.5,
"learning_rate": 2.9933429469046202e-05,
"loss": 1.0268,
"step": 18
},
{
"epoch": 5.0,
"learning_rate": 2.99178284305241e-05,
"loss": 0.9734,
"step": 20
},
{
"epoch": 5.5,
"learning_rate": 2.9900591484005944e-05,
"loss": 0.9755,
"step": 22
},
{
"epoch": 6.0,
"learning_rate": 2.988172051971717e-05,
"loss": 0.9142,
"step": 24
},
{
"epoch": 6.5,
"learning_rate": 2.9861217607071655e-05,
"loss": 0.8872,
"step": 26
},
{
"epoch": 7.0,
"learning_rate": 2.983908499444483e-05,
"loss": 0.8633,
"step": 28
},
{
"epoch": 7.5,
"learning_rate": 2.981532510892707e-05,
"loss": 0.8248,
"step": 30
},
{
"epoch": 8.0,
"learning_rate": 2.9789940556057574e-05,
"loss": 0.8352,
"step": 32
},
{
"epoch": 8.5,
"learning_rate": 2.9762934119538628e-05,
"loss": 0.8017,
"step": 34
},
{
"epoch": 9.0,
"learning_rate": 2.9734308760930333e-05,
"loss": 0.7705,
"step": 36
},
{
"epoch": 9.5,
"learning_rate": 2.9704067619325828e-05,
"loss": 0.7619,
"step": 38
},
{
"epoch": 10.0,
"learning_rate": 2.9672214011007087e-05,
"loss": 0.7593,
"step": 40
},
{
"epoch": 10.5,
"learning_rate": 2.9638751429081213e-05,
"loss": 0.7469,
"step": 42
},
{
"epoch": 11.0,
"learning_rate": 2.9603683543097406e-05,
"loss": 0.7477,
"step": 44
},
{
"epoch": 11.5,
"learning_rate": 2.9567014198644542e-05,
"loss": 0.716,
"step": 46
},
{
"epoch": 12.0,
"learning_rate": 2.9528747416929467e-05,
"loss": 0.7351,
"step": 48
},
{
"epoch": 12.5,
"learning_rate": 2.9488887394336025e-05,
"loss": 0.72,
"step": 50
},
{
"epoch": 13.0,
"learning_rate": 2.9447438501964873e-05,
"loss": 0.714,
"step": 52
},
{
"epoch": 13.5,
"learning_rate": 2.9404405285154146e-05,
"loss": 0.6994,
"step": 54
},
{
"epoch": 14.0,
"learning_rate": 2.9359792462981007e-05,
"loss": 0.7064,
"step": 56
},
{
"epoch": 14.5,
"learning_rate": 2.9313604927744153e-05,
"loss": 0.6807,
"step": 58
},
{
"epoch": 15.0,
"learning_rate": 2.9265847744427305e-05,
"loss": 0.6969,
"step": 60
},
{
"epoch": 15.5,
"learning_rate": 2.9216526150143788e-05,
"loss": 0.6836,
"step": 62
},
{
"epoch": 16.0,
"learning_rate": 2.9165645553562215e-05,
"loss": 0.6557,
"step": 64
},
{
"epoch": 16.5,
"learning_rate": 2.9113211534313385e-05,
"loss": 0.6619,
"step": 66
},
{
"epoch": 17.0,
"learning_rate": 2.9059229842378373e-05,
"loss": 0.6496,
"step": 68
},
{
"epoch": 17.5,
"learning_rate": 2.9003706397458025e-05,
"loss": 0.6268,
"step": 70
},
{
"epoch": 18.0,
"learning_rate": 2.894664728832377e-05,
"loss": 0.6586,
"step": 72
},
{
"epoch": 18.5,
"learning_rate": 2.8888058772149923e-05,
"loss": 0.6197,
"step": 74
},
{
"epoch": 19.0,
"learning_rate": 2.8827947273827508e-05,
"loss": 0.638,
"step": 76
},
{
"epoch": 19.5,
"learning_rate": 2.8766319385259717e-05,
"loss": 0.6093,
"step": 78
},
{
"epoch": 20.0,
"learning_rate": 2.8703181864639013e-05,
"loss": 0.6089,
"step": 80
},
{
"epoch": 20.5,
"learning_rate": 2.863854163570603e-05,
"loss": 0.6108,
"step": 82
},
{
"epoch": 21.0,
"learning_rate": 2.8572405786990293e-05,
"loss": 0.5776,
"step": 84
},
{
"epoch": 21.5,
"learning_rate": 2.8504781571032906e-05,
"loss": 0.5776,
"step": 86
},
{
"epoch": 22.0,
"learning_rate": 2.8435676403591193e-05,
"loss": 0.5997,
"step": 88
},
{
"epoch": 22.5,
"learning_rate": 2.8365097862825516e-05,
"loss": 0.5579,
"step": 90
},
{
"epoch": 23.0,
"learning_rate": 2.829305368846822e-05,
"loss": 0.579,
"step": 92
},
{
"epoch": 23.5,
"learning_rate": 2.821955178097488e-05,
"loss": 0.5689,
"step": 94
},
{
"epoch": 24.0,
"learning_rate": 2.8144600200657953e-05,
"loss": 0.5278,
"step": 96
},
{
"epoch": 24.5,
"learning_rate": 2.8068207166802843e-05,
"loss": 0.55,
"step": 98
},
{
"epoch": 25.0,
"learning_rate": 2.7990381056766583e-05,
"loss": 0.516,
"step": 100
},
{
"epoch": 25.5,
"learning_rate": 2.7911130405059155e-05,
"loss": 0.5342,
"step": 102
},
{
"epoch": 26.0,
"learning_rate": 2.78304639024076e-05,
"loss": 0.5021,
"step": 104
},
{
"epoch": 26.5,
"learning_rate": 2.774839039480296e-05,
"loss": 0.5042,
"step": 106
},
{
"epoch": 27.0,
"learning_rate": 2.7664918882530227e-05,
"loss": 0.4999,
"step": 108
}
],
"logging_steps": 2,
"max_steps": 600,
"num_input_tokens_seen": 0,
"num_train_epochs": 150,
"save_steps": 500,
"total_flos": 5145406010818560.0,
"train_batch_size": 52,
"trial_name": null,
"trial_params": null
}
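
The logged learning rates are consistent with cosine annealing from a peak of 3e-5 over `max_steps` = 600 with no warmup — the peak value is inferred from the logged numbers, not stated explicitly in the trainer state. A sketch that spot-checks two entries from `log_history`:

```python
import math

MAX_STEPS = 600   # "max_steps" from the trainer state above
PEAK_LR = 3e-5    # assumed peak learning rate, inferred from the logged values

def cosine_lr(step: int) -> float:
    """Cosine-annealed learning rate with no warmup."""
    return 0.5 * PEAK_LR * (1.0 + math.cos(math.pi * step / MAX_STEPS))

# Spot-check against entries from log_history above.
logged = {2: 2.9999177540482684e-05, 108: 2.7664918882530227e-05}
for step, lr in logged.items():
    assert math.isclose(cosine_lr(step), lr, rel_tol=1e-6)
print("logged learning rates match a no-warmup cosine decay")
```

This kind of spot-check is a quick way to confirm which scheduler a checkpoint was trained with when only the trainer state survives.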

Binary file not shown.

View File

@@ -0,0 +1,204 @@
---
library_name: peft
base_model: /home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2

View File

@@ -0,0 +1,32 @@
{
"alpha_pattern": {},
"auto_mapping": null,
"base_model_name_or_path": "/home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ",
"bias": "none",
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"layers_pattern": null,
"layers_to_transform": null,
"loftq_config": {},
"lora_alpha": 16,
"lora_dropout": 0.05,
"megatron_config": null,
"megatron_core": "megatron.core",
"modules_to_save": null,
"peft_type": "LORA",
"r": 16,
"rank_pattern": {},
"revision": null,
"target_modules": [
"k_proj",
"o_proj",
"gate_proj",
"q_proj",
"v_proj",
"up_proj",
"down_proj"
],
"task_type": "CAUSAL_LM",
"use_rslora": false
}

BIN
io-chatbot-v3/checkpoint-112/adapter_model.safetensors (Stored with Git LFS) Normal file

Binary file not shown.

BIN
io-chatbot-v3/checkpoint-112/optimizer.pt (Stored with Git LFS) Normal file

Binary file not shown.

Binary file not shown.

BIN
io-chatbot-v3/checkpoint-112/scheduler.pt (Stored with Git LFS) Normal file

Binary file not shown.

View File

@@ -0,0 +1,24 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": "</s>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

File diff suppressed because it is too large

Binary file not shown.

View File

@@ -0,0 +1,43 @@
{
"add_bos_token": false,
"add_eos_token": false,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [],
"bos_token": "<s>",
"chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"legacy": true,
"model_max_length": 1000000000000000019884624838656,
"pad_token": "</s>",
"sp_model_kwargs": {},
"spaces_between_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
"unk_token": "<unk>",
"use_default_system_prompt": false
}

View File

@@ -0,0 +1,357 @@
{
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 28.0,
"eval_steps": 5,
"global_step": 112,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 0.5,
"learning_rate": 2.9999177540482684e-05,
"loss": 2.6428,
"step": 2
},
{
"epoch": 1.0,
"learning_rate": 2.9996710252122685e-05,
"loss": 2.1838,
"step": 4
},
{
"epoch": 1.5,
"learning_rate": 2.9992598405485974e-05,
"loss": 1.8524,
"step": 6
},
{
"epoch": 2.0,
"learning_rate": 2.9986842451482876e-05,
"loss": 1.59,
"step": 8
},
{
"epoch": 2.5,
"learning_rate": 2.9979443021318607e-05,
"loss": 1.4015,
"step": 10
},
{
"epoch": 3.0,
"learning_rate": 2.9970400926424075e-05,
"loss": 1.2128,
"step": 12
},
{
"epoch": 3.5,
"learning_rate": 2.995971715836687e-05,
"loss": 1.1418,
"step": 14
},
{
"epoch": 4.0,
"learning_rate": 2.9947392888742566e-05,
"loss": 1.0389,
"step": 16
},
{
"epoch": 4.5,
"learning_rate": 2.9933429469046202e-05,
"loss": 1.0268,
"step": 18
},
{
"epoch": 5.0,
"learning_rate": 2.99178284305241e-05,
"loss": 0.9734,
"step": 20
},
{
"epoch": 5.5,
"learning_rate": 2.9900591484005944e-05,
"loss": 0.9755,
"step": 22
},
{
"epoch": 6.0,
"learning_rate": 2.988172051971717e-05,
"loss": 0.9142,
"step": 24
},
{
"epoch": 6.5,
"learning_rate": 2.9861217607071655e-05,
"loss": 0.8872,
"step": 26
},
{
"epoch": 7.0,
"learning_rate": 2.983908499444483e-05,
"loss": 0.8633,
"step": 28
},
{
"epoch": 7.5,
"learning_rate": 2.981532510892707e-05,
"loss": 0.8248,
"step": 30
},
{
"epoch": 8.0,
"learning_rate": 2.9789940556057574e-05,
"loss": 0.8352,
"step": 32
},
{
"epoch": 8.5,
"learning_rate": 2.9762934119538628e-05,
"loss": 0.8017,
"step": 34
},
{
"epoch": 9.0,
"learning_rate": 2.9734308760930333e-05,
"loss": 0.7705,
"step": 36
},
{
"epoch": 9.5,
"learning_rate": 2.9704067619325828e-05,
"loss": 0.7619,
"step": 38
},
{
"epoch": 10.0,
"learning_rate": 2.9672214011007087e-05,
"loss": 0.7593,
"step": 40
},
{
"epoch": 10.5,
"learning_rate": 2.9638751429081213e-05,
"loss": 0.7469,
"step": 42
},
{
"epoch": 11.0,
"learning_rate": 2.9603683543097406e-05,
"loss": 0.7477,
"step": 44
},
{
"epoch": 11.5,
"learning_rate": 2.9567014198644542e-05,
"loss": 0.716,
"step": 46
},
{
"epoch": 12.0,
"learning_rate": 2.9528747416929467e-05,
"loss": 0.7351,
"step": 48
},
{
"epoch": 12.5,
"learning_rate": 2.9488887394336025e-05,
"loss": 0.72,
"step": 50
},
{
"epoch": 13.0,
"learning_rate": 2.9447438501964873e-05,
"loss": 0.714,
"step": 52
},
{
"epoch": 13.5,
"learning_rate": 2.9404405285154146e-05,
"loss": 0.6994,
"step": 54
},
{
"epoch": 14.0,
"learning_rate": 2.9359792462981007e-05,
"loss": 0.7064,
"step": 56
},
{
"epoch": 14.5,
"learning_rate": 2.9313604927744153e-05,
"loss": 0.6807,
"step": 58
},
{
"epoch": 15.0,
"learning_rate": 2.9265847744427305e-05,
"loss": 0.6969,
"step": 60
},
{
"epoch": 15.5,
"learning_rate": 2.9216526150143788e-05,
"loss": 0.6836,
"step": 62
},
{
"epoch": 16.0,
"learning_rate": 2.9165645553562215e-05,
"loss": 0.6557,
"step": 64
},
{
"epoch": 16.5,
"learning_rate": 2.9113211534313385e-05,
"loss": 0.6619,
"step": 66
},
{
"epoch": 17.0,
"learning_rate": 2.9059229842378373e-05,
"loss": 0.6496,
"step": 68
},
{
"epoch": 17.5,
"learning_rate": 2.9003706397458025e-05,
"loss": 0.6268,
"step": 70
},
{
"epoch": 18.0,
"learning_rate": 2.894664728832377e-05,
"loss": 0.6586,
"step": 72
},
{
"epoch": 18.5,
"learning_rate": 2.8888058772149923e-05,
"loss": 0.6197,
"step": 74
},
{
"epoch": 19.0,
"learning_rate": 2.8827947273827508e-05,
"loss": 0.638,
"step": 76
},
{
"epoch": 19.5,
"learning_rate": 2.8766319385259717e-05,
"loss": 0.6093,
"step": 78
},
{
"epoch": 20.0,
"learning_rate": 2.8703181864639013e-05,
"loss": 0.6089,
"step": 80
},
{
"epoch": 20.5,
"learning_rate": 2.863854163570603e-05,
"loss": 0.6108,
"step": 82
},
{
"epoch": 21.0,
"learning_rate": 2.8572405786990293e-05,
"loss": 0.5776,
"step": 84
},
{
"epoch": 21.5,
"learning_rate": 2.8504781571032906e-05,
"loss": 0.5776,
"step": 86
},
{
"epoch": 22.0,
"learning_rate": 2.8435676403591193e-05,
"loss": 0.5997,
"step": 88
},
{
"epoch": 22.5,
"learning_rate": 2.8365097862825516e-05,
"loss": 0.5579,
"step": 90
},
{
"epoch": 23.0,
"learning_rate": 2.829305368846822e-05,
"loss": 0.579,
"step": 92
},
{
"epoch": 23.5,
"learning_rate": 2.821955178097488e-05,
"loss": 0.5689,
"step": 94
},
{
"epoch": 24.0,
"learning_rate": 2.8144600200657953e-05,
"loss": 0.5278,
"step": 96
},
{
"epoch": 24.5,
"learning_rate": 2.8068207166802843e-05,
"loss": 0.55,
"step": 98
},
{
"epoch": 25.0,
"learning_rate": 2.7990381056766583e-05,
"loss": 0.516,
"step": 100
},
{
"epoch": 25.5,
"learning_rate": 2.7911130405059155e-05,
"loss": 0.5342,
"step": 102
},
{
"epoch": 26.0,
"learning_rate": 2.78304639024076e-05,
"loss": 0.5021,
"step": 104
},
{
"epoch": 26.5,
"learning_rate": 2.774839039480296e-05,
"loss": 0.5042,
"step": 106
},
{
"epoch": 27.0,
"learning_rate": 2.7664918882530227e-05,
"loss": 0.4999,
"step": 108
},
{
"epoch": 27.5,
"learning_rate": 2.7580058519181363e-05,
"loss": 0.4847,
"step": 110
},
{
"epoch": 28.0,
"learning_rate": 2.7493818610651493e-05,
"loss": 0.467,
"step": 112
}
],
"logging_steps": 2,
"max_steps": 600,
"num_input_tokens_seen": 0,
"num_train_epochs": 150,
"save_steps": 500,
"total_flos": 5335976603811840.0,
"train_batch_size": 52,
"trial_name": null,
"trial_params": null
}

Binary file not shown.

View File

@@ -0,0 +1,204 @@
---
library_name: peft
base_model: /home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2

View File

@@ -0,0 +1,32 @@
{
"alpha_pattern": {},
"auto_mapping": null,
"base_model_name_or_path": "/home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ",
"bias": "none",
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"layers_pattern": null,
"layers_to_transform": null,
"loftq_config": {},
"lora_alpha": 16,
"lora_dropout": 0.05,
"megatron_config": null,
"megatron_core": "megatron.core",
"modules_to_save": null,
"peft_type": "LORA",
"r": 16,
"rank_pattern": {},
"revision": null,
"target_modules": [
"k_proj",
"o_proj",
"gate_proj",
"q_proj",
"v_proj",
"up_proj",
"down_proj"
],
"task_type": "CAUSAL_LM",
"use_rslora": false
}

BIN
io-chatbot-v3/checkpoint-116/adapter_model.safetensors (Stored with Git LFS) Normal file

Binary file not shown.

BIN
io-chatbot-v3/checkpoint-116/optimizer.pt (Stored with Git LFS) Normal file

Binary file not shown.

Binary file not shown.

BIN
io-chatbot-v3/checkpoint-116/scheduler.pt (Stored with Git LFS) Normal file

Binary file not shown.

View File

@@ -0,0 +1,24 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": "</s>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

File diff suppressed because it is too large

Binary file not shown.

View File

@@ -0,0 +1,43 @@
{
"add_bos_token": false,
"add_eos_token": false,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [],
"bos_token": "<s>",
"chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"legacy": true,
"model_max_length": 1000000000000000019884624838656,
"pad_token": "</s>",
"sp_model_kwargs": {},
"spaces_between_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
"unk_token": "<unk>",
"use_default_system_prompt": false
}

View File

@@ -0,0 +1,369 @@
{
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 29.0,
"eval_steps": 5,
"global_step": 116,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 0.5,
"learning_rate": 2.9999177540482684e-05,
"loss": 2.6428,
"step": 2
},
{
"epoch": 1.0,
"learning_rate": 2.9996710252122685e-05,
"loss": 2.1838,
"step": 4
},
{
"epoch": 1.5,
"learning_rate": 2.9992598405485974e-05,
"loss": 1.8524,
"step": 6
},
{
"epoch": 2.0,
"learning_rate": 2.9986842451482876e-05,
"loss": 1.59,
"step": 8
},
{
"epoch": 2.5,
"learning_rate": 2.9979443021318607e-05,
"loss": 1.4015,
"step": 10
},
{
"epoch": 3.0,
"learning_rate": 2.9970400926424075e-05,
"loss": 1.2128,
"step": 12
},
{
"epoch": 3.5,
"learning_rate": 2.995971715836687e-05,
"loss": 1.1418,
"step": 14
},
{
"epoch": 4.0,
"learning_rate": 2.9947392888742566e-05,
"loss": 1.0389,
"step": 16
},
{
"epoch": 4.5,
"learning_rate": 2.9933429469046202e-05,
"loss": 1.0268,
"step": 18
},
{
"epoch": 5.0,
"learning_rate": 2.99178284305241e-05,
"loss": 0.9734,
"step": 20
},
{
"epoch": 5.5,
"learning_rate": 2.9900591484005944e-05,
"loss": 0.9755,
"step": 22
},
{
"epoch": 6.0,
"learning_rate": 2.988172051971717e-05,
"loss": 0.9142,
"step": 24
},
{
"epoch": 6.5,
"learning_rate": 2.9861217607071655e-05,
"loss": 0.8872,
"step": 26
},
{
"epoch": 7.0,
"learning_rate": 2.983908499444483e-05,
"loss": 0.8633,
"step": 28
},
{
"epoch": 7.5,
"learning_rate": 2.981532510892707e-05,
"loss": 0.8248,
"step": 30
},
{
"epoch": 8.0,
"learning_rate": 2.9789940556057574e-05,
"loss": 0.8352,
"step": 32
},
{
"epoch": 8.5,
"learning_rate": 2.9762934119538628e-05,
"loss": 0.8017,
"step": 34
},
{
"epoch": 9.0,
"learning_rate": 2.9734308760930333e-05,
"loss": 0.7705,
"step": 36
},
{
"epoch": 9.5,
"learning_rate": 2.9704067619325828e-05,
"loss": 0.7619,
"step": 38
},
{
"epoch": 10.0,
"learning_rate": 2.9672214011007087e-05,
"loss": 0.7593,
"step": 40
},
{
"epoch": 10.5,
"learning_rate": 2.9638751429081213e-05,
"loss": 0.7469,
"step": 42
},
{
"epoch": 11.0,
"learning_rate": 2.9603683543097406e-05,
"loss": 0.7477,
"step": 44
},
{
"epoch": 11.5,
"learning_rate": 2.9567014198644542e-05,
"loss": 0.716,
"step": 46
},
{
"epoch": 12.0,
"learning_rate": 2.9528747416929467e-05,
"loss": 0.7351,
"step": 48
},
{
"epoch": 12.5,
"learning_rate": 2.9488887394336025e-05,
"loss": 0.72,
"step": 50
},
{
"epoch": 13.0,
"learning_rate": 2.9447438501964873e-05,
"loss": 0.714,
"step": 52
},
{
"epoch": 13.5,
"learning_rate": 2.9404405285154146e-05,
"loss": 0.6994,
"step": 54
},
{
"epoch": 14.0,
"learning_rate": 2.9359792462981007e-05,
"loss": 0.7064,
"step": 56
},
{
"epoch": 14.5,
"learning_rate": 2.9313604927744153e-05,
"loss": 0.6807,
"step": 58
},
{
"epoch": 15.0,
"learning_rate": 2.9265847744427305e-05,
"loss": 0.6969,
"step": 60
},
{
"epoch": 15.5,
"learning_rate": 2.9216526150143788e-05,
"loss": 0.6836,
"step": 62
},
{
"epoch": 16.0,
"learning_rate": 2.9165645553562215e-05,
"loss": 0.6557,
"step": 64
},
{
"epoch": 16.5,
"learning_rate": 2.9113211534313385e-05,
"loss": 0.6619,
"step": 66
},
{
"epoch": 17.0,
"learning_rate": 2.9059229842378373e-05,
"loss": 0.6496,
"step": 68
},
{
"epoch": 17.5,
"learning_rate": 2.9003706397458025e-05,
"loss": 0.6268,
"step": 70
},
{
"epoch": 18.0,
"learning_rate": 2.894664728832377e-05,
"loss": 0.6586,
"step": 72
},
{
"epoch": 18.5,
"learning_rate": 2.8888058772149923e-05,
"loss": 0.6197,
"step": 74
},
{
"epoch": 19.0,
"learning_rate": 2.8827947273827508e-05,
"loss": 0.638,
"step": 76
},
{
"epoch": 19.5,
"learning_rate": 2.8766319385259717e-05,
"loss": 0.6093,
"step": 78
},
{
"epoch": 20.0,
"learning_rate": 2.8703181864639013e-05,
"loss": 0.6089,
"step": 80
},
{
"epoch": 20.5,
"learning_rate": 2.863854163570603e-05,
"loss": 0.6108,
"step": 82
},
{
"epoch": 21.0,
"learning_rate": 2.8572405786990293e-05,
"loss": 0.5776,
"step": 84
},
{
"epoch": 21.5,
"learning_rate": 2.8504781571032906e-05,
"loss": 0.5776,
"step": 86
},
{
"epoch": 22.0,
"learning_rate": 2.8435676403591193e-05,
"loss": 0.5997,
"step": 88
},
{
"epoch": 22.5,
"learning_rate": 2.8365097862825516e-05,
"loss": 0.5579,
"step": 90
},
{
"epoch": 23.0,
"learning_rate": 2.829305368846822e-05,
"loss": 0.579,
"step": 92
},
{
"epoch": 23.5,
"learning_rate": 2.821955178097488e-05,
"loss": 0.5689,
"step": 94
},
{
"epoch": 24.0,
"learning_rate": 2.8144600200657953e-05,
"loss": 0.5278,
"step": 96
},
{
"epoch": 24.5,
"learning_rate": 2.8068207166802843e-05,
"loss": 0.55,
"step": 98
},
{
"epoch": 25.0,
"learning_rate": 2.7990381056766583e-05,
"loss": 0.516,
"step": 100
},
{
"epoch": 25.5,
"learning_rate": 2.7911130405059155e-05,
"loss": 0.5342,
"step": 102
},
{
"epoch": 26.0,
"learning_rate": 2.78304639024076e-05,
"loss": 0.5021,
"step": 104
},
{
"epoch": 26.5,
"learning_rate": 2.774839039480296e-05,
"loss": 0.5042,
"step": 106
},
{
"epoch": 27.0,
"learning_rate": 2.7664918882530227e-05,
"loss": 0.4999,
"step": 108
},
{
"epoch": 27.5,
"learning_rate": 2.7580058519181363e-05,
"loss": 0.4847,
"step": 110
},
{
"epoch": 28.0,
"learning_rate": 2.7493818610651493e-05,
"loss": 0.467,
"step": 112
},
{
"epoch": 28.5,
"learning_rate": 2.7406208614118427e-05,
"loss": 0.462,
"step": 114
},
{
"epoch": 29.0,
"learning_rate": 2.731723813700556e-05,
"loss": 0.4576,
"step": 116
}
],
"logging_steps": 2,
"max_steps": 600,
"num_input_tokens_seen": 0,
"num_train_epochs": 150,
"save_steps": 500,
"total_flos": 5526547196805120.0,
"train_batch_size": 52,
"trial_name": null,
"trial_params": null
}
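The `learning_rate` values logged above are consistent with a plain cosine decay from a peak of 3e-5 over `max_steps` = 600 with no warmup. A quick sketch reproduces the logged entries (a check under that assumption, not the trainer's own code):

```python
import math

def cosine_lr(step, peak_lr=3e-5, max_steps=600):
    # Standard cosine decay with no warmup:
    # peak_lr * 0.5 * (1 + cos(pi * step / max_steps))
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * step / max_steps))

print(cosine_lr(2))    # matches the step-2 entry, ~2.99992e-05
print(cosine_lr(100))  # matches the step-100 entry, ~2.79904e-05
```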


@@ -0,0 +1,204 @@
---
library_name: peft
base_model: /home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
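The snippet is still marked as missing; until it is filled in, here is a minimal, hedged sketch for loading this LoRA adapter with PEFT. Note the `base_model` path in this card is a local directory, so it must exist on your machine (or be edited), GPTQ support (e.g. auto-gptq) must be installed, and the checkpoint directory name below is illustrative:

```python
# Sketch only — the adapter directory name is a hypothetical example.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_dir = "io-chatbot-v3/checkpoint-120"

# Loads the GPTQ base model referenced by the adapter's base_model_name_or_path,
# then attaches the LoRA weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_dir, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(adapter_dir)

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```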
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2


@@ -0,0 +1,32 @@
{
"alpha_pattern": {},
"auto_mapping": null,
"base_model_name_or_path": "/home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ",
"bias": "none",
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"layers_pattern": null,
"layers_to_transform": null,
"loftq_config": {},
"lora_alpha": 16,
"lora_dropout": 0.05,
"megatron_config": null,
"megatron_core": "megatron.core",
"modules_to_save": null,
"peft_type": "LORA",
"r": 16,
"rank_pattern": {},
"revision": null,
"target_modules": [
"k_proj",
"o_proj",
"gate_proj",
"q_proj",
"v_proj",
"up_proj",
"down_proj"
],
"task_type": "CAUSAL_LM",
"use_rslora": false
}
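The adapter config above specifies a rank-16 LoRA (`r` = 16, `lora_alpha` = 16, so the scaling factor alpha/r is 1.0) on seven attention/MLP projection matrices. LoRA keeps the quantized base weight W frozen and learns a low-rank update, computing y = Wx + (alpha/r)·B·A·x. A toy pure-Python sketch of that forward pass (tiny dimensions for illustration — the real Mistral-7B projections are 4096-wide):

```python
# LoRA hyperparameters from adapter_config.json: rank r=16, lora_alpha=16.
r, lora_alpha = 16, 16
scaling = lora_alpha / r  # = 1.0

def matvec(M, x):
    # Plain row-by-row matrix-vector product.
    return [sum(w * xi for w, xi in zip(row, x)) for row in M]

d = 4  # toy width; the real q_proj/k_proj/... are 4096-wide
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base weight (identity toy)
A = [[0.01] * d for _ in range(r)]  # trainable r x d down-projection
B = [[0.0] * r for _ in range(d)]   # trainable d x r up-projection, zero-initialised

x = [1.0, 2.0, 3.0, 4.0]
# LoRA forward: y = W @ x + (lora_alpha / r) * B @ (A @ x)
delta = [scaling * v for v in matvec(B, matvec(A, x))]
y = [base + dv for base, dv in zip(matvec(W, x), delta)]

# Because B starts at zero, the adapter is a no-op before any training step:
assert y == matvec(W, x)
```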

BIN
io-chatbot-v3/checkpoint-12/adapter_model.safetensors (Stored with Git LFS) Normal file

BIN
io-chatbot-v3/checkpoint-12/optimizer.pt (Stored with Git LFS) Normal file

BIN
io-chatbot-v3/checkpoint-12/scheduler.pt (Stored with Git LFS) Normal file

@@ -0,0 +1,24 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": "</s>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

File diff suppressed because it is too large.

@@ -0,0 +1,43 @@
{
"add_bos_token": false,
"add_eos_token": false,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [],
"bos_token": "<s>",
"chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"legacy": true,
"model_max_length": 1000000000000000019884624838656,
"pad_token": "</s>",
"sp_model_kwargs": {},
"spaces_between_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
"unk_token": "<unk>",
"use_default_system_prompt": false
}
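The `chat_template` above is the standard Mistral-Instruct format: each user turn is wrapped in `[INST] … [/INST]`, each assistant turn is terminated by `</s>`, and roles must strictly alternate starting with the user. A pure-Python mirror of that Jinja logic (an illustrative re-implementation, not the tokenizer's own code):

```python
def apply_chat_template(messages, bos_token="<s>", eos_token="</s>"):
    # Mirrors the Jinja chat_template from tokenizer_config.json above.
    pieces = [bos_token]
    for i, message in enumerate(messages):
        # The template enforces user/assistant alternation, user first.
        if (message["role"] == "user") != (i % 2 == 0):
            raise ValueError("Conversation roles must alternate user/assistant/user/assistant/...")
        if message["role"] == "user":
            pieces.append("[INST] " + message["content"] + " [/INST]")
        elif message["role"] == "assistant":
            pieces.append(message["content"] + eos_token)
        else:
            raise ValueError("Only user and assistant roles are supported!")
    return "".join(pieces)

print(apply_chat_template([
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
]))  # <s>[INST] Hi [/INST]Hello!</s>
```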


@@ -0,0 +1,57 @@
{
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 3.0,
"eval_steps": 5,
"global_step": 12,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 0.5,
"learning_rate": 2.9999177540482684e-05,
"loss": 2.6428,
"step": 2
},
{
"epoch": 1.0,
"learning_rate": 2.9996710252122685e-05,
"loss": 2.1838,
"step": 4
},
{
"epoch": 1.5,
"learning_rate": 2.9992598405485974e-05,
"loss": 1.8524,
"step": 6
},
{
"epoch": 2.0,
"learning_rate": 2.9986842451482876e-05,
"loss": 1.59,
"step": 8
},
{
"epoch": 2.5,
"learning_rate": 2.9979443021318607e-05,
"loss": 1.4015,
"step": 10
},
{
"epoch": 3.0,
"learning_rate": 2.9970400926424075e-05,
"loss": 1.2128,
"step": 12
}
],
"logging_steps": 2,
"max_steps": 600,
"num_input_tokens_seen": 0,
"num_train_epochs": 150,
"save_steps": 500,
"total_flos": 571711778979840.0,
"train_batch_size": 52,
"trial_name": null,
"trial_params": null
}


@@ -0,0 +1,204 @@
---
library_name: peft
base_model: /home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2


@@ -0,0 +1,32 @@
{
"alpha_pattern": {},
"auto_mapping": null,
"base_model_name_or_path": "/home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ",
"bias": "none",
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"layers_pattern": null,
"layers_to_transform": null,
"loftq_config": {},
"lora_alpha": 16,
"lora_dropout": 0.05,
"megatron_config": null,
"megatron_core": "megatron.core",
"modules_to_save": null,
"peft_type": "LORA",
"r": 16,
"rank_pattern": {},
"revision": null,
"target_modules": [
"k_proj",
"o_proj",
"gate_proj",
"q_proj",
"v_proj",
"up_proj",
"down_proj"
],
"task_type": "CAUSAL_LM",
"use_rslora": false
}

BIN
io-chatbot-v3/checkpoint-120/adapter_model.safetensors (Stored with Git LFS) Normal file

BIN
io-chatbot-v3/checkpoint-120/optimizer.pt (Stored with Git LFS) Normal file

BIN
io-chatbot-v3/checkpoint-120/scheduler.pt (Stored with Git LFS) Normal file

@@ -0,0 +1,24 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": "</s>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

File diff suppressed because it is too large.

@@ -0,0 +1,43 @@
{
"add_bos_token": false,
"add_eos_token": false,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [],
"bos_token": "<s>",
"chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"legacy": true,
"model_max_length": 1000000000000000019884624838656,
"pad_token": "</s>",
"sp_model_kwargs": {},
"spaces_between_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
"unk_token": "<unk>",
"use_default_system_prompt": false
}


@@ -0,0 +1,381 @@
{
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 30.0,
"eval_steps": 5,
"global_step": 120,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 0.5,
"learning_rate": 2.9999177540482684e-05,
"loss": 2.6428,
"step": 2
},
{
"epoch": 1.0,
"learning_rate": 2.9996710252122685e-05,
"loss": 2.1838,
"step": 4
},
{
"epoch": 1.5,
"learning_rate": 2.9992598405485974e-05,
"loss": 1.8524,
"step": 6
},
{
"epoch": 2.0,
"learning_rate": 2.9986842451482876e-05,
"loss": 1.59,
"step": 8
},
{
"epoch": 2.5,
"learning_rate": 2.9979443021318607e-05,
"loss": 1.4015,
"step": 10
},
{
"epoch": 3.0,
"learning_rate": 2.9970400926424075e-05,
"loss": 1.2128,
"step": 12
},
{
"epoch": 3.5,
"learning_rate": 2.995971715836687e-05,
"loss": 1.1418,
"step": 14
},
{
"epoch": 4.0,
"learning_rate": 2.9947392888742566e-05,
"loss": 1.0389,
"step": 16
},
{
"epoch": 4.5,
"learning_rate": 2.9933429469046202e-05,
"loss": 1.0268,
"step": 18
},
{
"epoch": 5.0,
"learning_rate": 2.99178284305241e-05,
"loss": 0.9734,
"step": 20
},
{
"epoch": 5.5,
"learning_rate": 2.9900591484005944e-05,
"loss": 0.9755,
"step": 22
},
{
"epoch": 6.0,
"learning_rate": 2.988172051971717e-05,
"loss": 0.9142,
"step": 24
},
{
"epoch": 6.5,
"learning_rate": 2.9861217607071655e-05,
"loss": 0.8872,
"step": 26
},
{
"epoch": 7.0,
"learning_rate": 2.983908499444483e-05,
"loss": 0.8633,
"step": 28
},
{
"epoch": 7.5,
"learning_rate": 2.981532510892707e-05,
"loss": 0.8248,
"step": 30
},
{
"epoch": 8.0,
"learning_rate": 2.9789940556057574e-05,
"loss": 0.8352,
"step": 32
},
{
"epoch": 8.5,
"learning_rate": 2.9762934119538628e-05,
"loss": 0.8017,
"step": 34
},
{
"epoch": 9.0,
"learning_rate": 2.9734308760930333e-05,
"loss": 0.7705,
"step": 36
},
{
"epoch": 9.5,
"learning_rate": 2.9704067619325828e-05,
"loss": 0.7619,
"step": 38
},
{
"epoch": 10.0,
"learning_rate": 2.9672214011007087e-05,
"loss": 0.7593,
"step": 40
},
{
"epoch": 10.5,
"learning_rate": 2.9638751429081213e-05,
"loss": 0.7469,
"step": 42
},
{
"epoch": 11.0,
"learning_rate": 2.9603683543097406e-05,
"loss": 0.7477,
"step": 44
},
{
"epoch": 11.5,
"learning_rate": 2.9567014198644542e-05,
"loss": 0.716,
"step": 46
},
{
"epoch": 12.0,
"learning_rate": 2.9528747416929467e-05,
"loss": 0.7351,
"step": 48
},
{
"epoch": 12.5,
"learning_rate": 2.9488887394336025e-05,
"loss": 0.72,
"step": 50
},
{
"epoch": 13.0,
"learning_rate": 2.9447438501964873e-05,
"loss": 0.714,
"step": 52
},
{
"epoch": 13.5,
"learning_rate": 2.9404405285154146e-05,
"loss": 0.6994,
"step": 54
},
{
"epoch": 14.0,
"learning_rate": 2.9359792462981007e-05,
"loss": 0.7064,
"step": 56
},
{
"epoch": 14.5,
"learning_rate": 2.9313604927744153e-05,
"loss": 0.6807,
"step": 58
},
{
"epoch": 15.0,
"learning_rate": 2.9265847744427305e-05,
"loss": 0.6969,
"step": 60
},
{
"epoch": 15.5,
"learning_rate": 2.9216526150143788e-05,
"loss": 0.6836,
"step": 62
},
{
"epoch": 16.0,
"learning_rate": 2.9165645553562215e-05,
"loss": 0.6557,
"step": 64
},
{
"epoch": 16.5,
"learning_rate": 2.9113211534313385e-05,
"loss": 0.6619,
"step": 66
},
{
"epoch": 17.0,
"learning_rate": 2.9059229842378373e-05,
"loss": 0.6496,
"step": 68
},
{
"epoch": 17.5,
"learning_rate": 2.9003706397458025e-05,
"loss": 0.6268,
"step": 70
},
{
"epoch": 18.0,
"learning_rate": 2.894664728832377e-05,
"loss": 0.6586,
"step": 72
},
{
"epoch": 18.5,
"learning_rate": 2.8888058772149923e-05,
"loss": 0.6197,
"step": 74
},
{
"epoch": 19.0,
"learning_rate": 2.8827947273827508e-05,
"loss": 0.638,
"step": 76
},
{
"epoch": 19.5,
"learning_rate": 2.8766319385259717e-05,
"loss": 0.6093,
"step": 78
},
{
"epoch": 20.0,
"learning_rate": 2.8703181864639013e-05,
"loss": 0.6089,
"step": 80
},
{
"epoch": 20.5,
"learning_rate": 2.863854163570603e-05,
"loss": 0.6108,
"step": 82
},
{
"epoch": 21.0,
"learning_rate": 2.8572405786990293e-05,
"loss": 0.5776,
"step": 84
},
{
"epoch": 21.5,
"learning_rate": 2.8504781571032906e-05,
"loss": 0.5776,
"step": 86
},
{
"epoch": 22.0,
"learning_rate": 2.8435676403591193e-05,
"loss": 0.5997,
"step": 88
},
{
"epoch": 22.5,
"learning_rate": 2.8365097862825516e-05,
"loss": 0.5579,
"step": 90
},
{
"epoch": 23.0,
"learning_rate": 2.829305368846822e-05,
"loss": 0.579,
"step": 92
},
{
"epoch": 23.5,
"learning_rate": 2.821955178097488e-05,
"loss": 0.5689,
"step": 94
},
{
"epoch": 24.0,
"learning_rate": 2.8144600200657953e-05,
"loss": 0.5278,
"step": 96
},
{
"epoch": 24.5,
"learning_rate": 2.8068207166802843e-05,
"loss": 0.55,
"step": 98
},
{
"epoch": 25.0,
"learning_rate": 2.7990381056766583e-05,
"loss": 0.516,
"step": 100
},
{
"epoch": 25.5,
"learning_rate": 2.7911130405059155e-05,
"loss": 0.5342,
"step": 102
},
{
"epoch": 26.0,
"learning_rate": 2.78304639024076e-05,
"loss": 0.5021,
"step": 104
},
{
"epoch": 26.5,
"learning_rate": 2.774839039480296e-05,
"loss": 0.5042,
"step": 106
},
{
"epoch": 27.0,
"learning_rate": 2.7664918882530227e-05,
"loss": 0.4999,
"step": 108
},
{
"epoch": 27.5,
"learning_rate": 2.7580058519181363e-05,
"loss": 0.4847,
"step": 110
},
{
"epoch": 28.0,
"learning_rate": 2.7493818610651493e-05,
"loss": 0.467,
"step": 112
},
{
"epoch": 28.5,
"learning_rate": 2.7406208614118427e-05,
"loss": 0.462,
"step": 114
},
{
"epoch": 29.0,
"learning_rate": 2.731723813700556e-05,
"loss": 0.4576,
"step": 116
},
{
"epoch": 29.5,
"learning_rate": 2.7226916935928312e-05,
"loss": 0.45,
"step": 118
},
{
"epoch": 30.0,
"learning_rate": 2.7135254915624213e-05,
"loss": 0.4045,
"step": 120
}
],
"logging_steps": 2,
"max_steps": 600,
"num_input_tokens_seen": 0,
"num_train_epochs": 150,
"save_steps": 500,
"total_flos": 5717117789798400.0,
"train_batch_size": 52,
"trial_name": null,
"trial_params": null
}


@@ -0,0 +1,204 @@
---
library_name: peft
base_model: /home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2


@@ -0,0 +1,32 @@
{
"alpha_pattern": {},
"auto_mapping": null,
"base_model_name_or_path": "/home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ",
"bias": "none",
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"layers_pattern": null,
"layers_to_transform": null,
"loftq_config": {},
"lora_alpha": 16,
"lora_dropout": 0.05,
"megatron_config": null,
"megatron_core": "megatron.core",
"modules_to_save": null,
"peft_type": "LORA",
"r": 16,
"rank_pattern": {},
"revision": null,
"target_modules": [
"k_proj",
"o_proj",
"gate_proj",
"q_proj",
"v_proj",
"up_proj",
"down_proj"
],
"task_type": "CAUSAL_LM",
"use_rslora": false
}

BIN
io-chatbot-v3/checkpoint-124/adapter_model.safetensors (Stored with Git LFS) Normal file

BIN
io-chatbot-v3/checkpoint-124/optimizer.pt (Stored with Git LFS) Normal file

BIN
io-chatbot-v3/checkpoint-124/scheduler.pt (Stored with Git LFS) Normal file

@@ -0,0 +1,24 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": "</s>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

File diff suppressed because it is too large.

@@ -0,0 +1,43 @@
{
"add_bos_token": false,
"add_eos_token": false,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [],
"bos_token": "<s>",
"chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"legacy": true,
"model_max_length": 1000000000000000019884624838656,
"pad_token": "</s>",
"sp_model_kwargs": {},
"spaces_between_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
"unk_token": "<unk>",
"use_default_system_prompt": false
}


@@ -0,0 +1,393 @@
{
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 31.0,
"eval_steps": 5,
"global_step": 124,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 0.5,
"learning_rate": 2.9999177540482684e-05,
"loss": 2.6428,
"step": 2
},
{
"epoch": 1.0,
"learning_rate": 2.9996710252122685e-05,
"loss": 2.1838,
"step": 4
},
{
"epoch": 1.5,
"learning_rate": 2.9992598405485974e-05,
"loss": 1.8524,
"step": 6
},
{
"epoch": 2.0,
"learning_rate": 2.9986842451482876e-05,
"loss": 1.59,
"step": 8
},
{
"epoch": 2.5,
"learning_rate": 2.9979443021318607e-05,
"loss": 1.4015,
"step": 10
},
{
"epoch": 3.0,
"learning_rate": 2.9970400926424075e-05,
"loss": 1.2128,
"step": 12
},
{
"epoch": 3.5,
"learning_rate": 2.995971715836687e-05,
"loss": 1.1418,
"step": 14
},
{
"epoch": 4.0,
"learning_rate": 2.9947392888742566e-05,
"loss": 1.0389,
"step": 16
},
{
"epoch": 4.5,
"learning_rate": 2.9933429469046202e-05,
"loss": 1.0268,
"step": 18
},
{
"epoch": 5.0,
"learning_rate": 2.99178284305241e-05,
"loss": 0.9734,
"step": 20
},
{
"epoch": 5.5,
"learning_rate": 2.9900591484005944e-05,
"loss": 0.9755,
"step": 22
},
{
"epoch": 6.0,
"learning_rate": 2.988172051971717e-05,
"loss": 0.9142,
"step": 24
},
{
"epoch": 6.5,
"learning_rate": 2.9861217607071655e-05,
"loss": 0.8872,
"step": 26
},
{
"epoch": 7.0,
"learning_rate": 2.983908499444483e-05,
"loss": 0.8633,
"step": 28
},
{
"epoch": 7.5,
"learning_rate": 2.981532510892707e-05,
"loss": 0.8248,
"step": 30
},
{
"epoch": 8.0,
"learning_rate": 2.9789940556057574e-05,
"loss": 0.8352,
"step": 32
},
{
"epoch": 8.5,
"learning_rate": 2.9762934119538628e-05,
"loss": 0.8017,
"step": 34
},
{
"epoch": 9.0,
"learning_rate": 2.9734308760930333e-05,
"loss": 0.7705,
"step": 36
},
{
"epoch": 9.5,
"learning_rate": 2.9704067619325828e-05,
"loss": 0.7619,
"step": 38
},
{
"epoch": 10.0,
"learning_rate": 2.9672214011007087e-05,
"loss": 0.7593,
"step": 40
},
{
"epoch": 10.5,
"learning_rate": 2.9638751429081213e-05,
"loss": 0.7469,
"step": 42
},
{
"epoch": 11.0,
"learning_rate": 2.9603683543097406e-05,
"loss": 0.7477,
"step": 44
},
{
"epoch": 11.5,
"learning_rate": 2.9567014198644542e-05,
"loss": 0.716,
"step": 46
},
{
"epoch": 12.0,
"learning_rate": 2.9528747416929467e-05,
"loss": 0.7351,
"step": 48
},
{
"epoch": 12.5,
"learning_rate": 2.9488887394336025e-05,
"loss": 0.72,
"step": 50
},
{
"epoch": 13.0,
"learning_rate": 2.9447438501964873e-05,
"loss": 0.714,
"step": 52
},
{
"epoch": 13.5,
"learning_rate": 2.9404405285154146e-05,
"loss": 0.6994,
"step": 54
},
{
"epoch": 14.0,
"learning_rate": 2.9359792462981007e-05,
"loss": 0.7064,
"step": 56
},
{
"epoch": 14.5,
"learning_rate": 2.9313604927744153e-05,
"loss": 0.6807,
"step": 58
},
{
"epoch": 15.0,
"learning_rate": 2.9265847744427305e-05,
"loss": 0.6969,
"step": 60
},
{
"epoch": 15.5,
"learning_rate": 2.9216526150143788e-05,
"loss": 0.6836,
"step": 62
},
{
"epoch": 16.0,
"learning_rate": 2.9165645553562215e-05,
"loss": 0.6557,
"step": 64
},
{
"epoch": 16.5,
"learning_rate": 2.9113211534313385e-05,
"loss": 0.6619,
"step": 66
},
{
"epoch": 17.0,
"learning_rate": 2.9059229842378373e-05,
"loss": 0.6496,
"step": 68
},
{
"epoch": 17.5,
"learning_rate": 2.9003706397458025e-05,
"loss": 0.6268,
"step": 70
},
{
"epoch": 18.0,
"learning_rate": 2.894664728832377e-05,
"loss": 0.6586,
"step": 72
},
{
"epoch": 18.5,
"learning_rate": 2.8888058772149923e-05,
"loss": 0.6197,
"step": 74
},
{
"epoch": 19.0,
"learning_rate": 2.8827947273827508e-05,
"loss": 0.638,
"step": 76
},
{
"epoch": 19.5,
"learning_rate": 2.8766319385259717e-05,
"loss": 0.6093,
"step": 78
},
{
"epoch": 20.0,
"learning_rate": 2.8703181864639013e-05,
"loss": 0.6089,
"step": 80
},
{
"epoch": 20.5,
"learning_rate": 2.863854163570603e-05,
"loss": 0.6108,
"step": 82
},
{
"epoch": 21.0,
"learning_rate": 2.8572405786990293e-05,
"loss": 0.5776,
"step": 84
},
{
"epoch": 21.5,
"learning_rate": 2.8504781571032906e-05,
"loss": 0.5776,
"step": 86
},
{
"epoch": 22.0,
"learning_rate": 2.8435676403591193e-05,
"loss": 0.5997,
"step": 88
},
{
"epoch": 22.5,
"learning_rate": 2.8365097862825516e-05,
"loss": 0.5579,
"step": 90
},
{
"epoch": 23.0,
"learning_rate": 2.829305368846822e-05,
"loss": 0.579,
"step": 92
},
{
"epoch": 23.5,
"learning_rate": 2.821955178097488e-05,
"loss": 0.5689,
"step": 94
},
{
"epoch": 24.0,
"learning_rate": 2.8144600200657953e-05,
"loss": 0.5278,
"step": 96
},
{
"epoch": 24.5,
"learning_rate": 2.8068207166802843e-05,
"loss": 0.55,
"step": 98
},
{
"epoch": 25.0,
"learning_rate": 2.7990381056766583e-05,
"loss": 0.516,
"step": 100
},
{
"epoch": 25.5,
"learning_rate": 2.7911130405059155e-05,
"loss": 0.5342,
"step": 102
},
{
"epoch": 26.0,
"learning_rate": 2.78304639024076e-05,
"loss": 0.5021,
"step": 104
},
{
"epoch": 26.5,
"learning_rate": 2.774839039480296e-05,
"loss": 0.5042,
"step": 106
},
{
"epoch": 27.0,
"learning_rate": 2.7664918882530227e-05,
"loss": 0.4999,
"step": 108
},
{
"epoch": 27.5,
"learning_rate": 2.7580058519181363e-05,
"loss": 0.4847,
"step": 110
},
{
"epoch": 28.0,
"learning_rate": 2.7493818610651493e-05,
"loss": 0.467,
"step": 112
},
{
"epoch": 28.5,
"learning_rate": 2.7406208614118427e-05,
"loss": 0.462,
"step": 114
},
{
"epoch": 29.0,
"learning_rate": 2.731723813700556e-05,
"loss": 0.4576,
"step": 116
},
{
"epoch": 29.5,
"learning_rate": 2.7226916935928312e-05,
"loss": 0.45,
"step": 118
},
{
"epoch": 30.0,
"learning_rate": 2.7135254915624213e-05,
"loss": 0.4045,
"step": 120
},
{
"epoch": 30.5,
"learning_rate": 2.7042262127866718e-05,
"loss": 0.424,
"step": 122
},
{
"epoch": 31.0,
"learning_rate": 2.6947948770362945e-05,
"loss": 0.3926,
"step": 124
}
],
"logging_steps": 2,
"max_steps": 600,
"num_input_tokens_seen": 0,
"num_train_epochs": 150,
"save_steps": 500,
"total_flos": 5907688382791680.0,
"train_batch_size": 52,
"trial_name": null,
"trial_params": null
}
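The `log_history` above records the training loss falling from 2.6428 at step 2 to 0.3926 at step 124. A small sketch of how such a trainer state can be summarised (only the first and last of the 62 logged entries are inlined here):

```python
# First and last log_history entries, copied from the trainer state above.
log_history = [
    {"epoch": 0.5, "learning_rate": 2.9999177540482684e-05, "loss": 2.6428, "step": 2},
    {"epoch": 31.0, "learning_rate": 2.6947948770362945e-05, "loss": 0.3926, "step": 124},
]

first, last = log_history[0], log_history[-1]
drop = round(first["loss"] - last["loss"], 4)
print(f"loss {first['loss']} -> {last['loss']} "
      f"(drop {drop}) over {last['step']} steps, {last['epoch']} epochs")
```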

Binary file not shown.


@@ -0,0 +1,204 @@
---
library_name: peft
base_model: /home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
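A minimal loading sketch for a LoRA adapter of this kind, assuming recent `peft` and `transformers` releases; `path/to/adapter` is a hypothetical placeholder for a local checkpoint directory such as the ones in this repo:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Hypothetical path -- point this at a cloned checkpoint directory.
adapter_path = "path/to/adapter"

# AutoPeftModelForCausalLM reads adapter_config.json, loads the base
# model it names, and attaches the LoRA weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(adapter_path)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```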
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2


@@ -0,0 +1,32 @@
{
"alpha_pattern": {},
"auto_mapping": null,
"base_model_name_or_path": "/home/paulius/Data/sync/RND/ncn/Mistral-7B-Instruct-v0.2-GPTQ",
"bias": "none",
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"layers_pattern": null,
"layers_to_transform": null,
"loftq_config": {},
"lora_alpha": 16,
"lora_dropout": 0.05,
"megatron_config": null,
"megatron_core": "megatron.core",
"modules_to_save": null,
"peft_type": "LORA",
"r": 16,
"rank_pattern": {},
"revision": null,
"target_modules": [
"k_proj",
"o_proj",
"gate_proj",
"q_proj",
"v_proj",
"up_proj",
"down_proj"
],
"task_type": "CAUSAL_LM",
"use_rslora": false
}
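A back-of-envelope count of the trainable parameters this LoRA config implies. The Mistral-7B dimensions used below (hidden size 4096, grouped-query kv projection dim 1024, MLP dim 14336, 32 layers) are assumptions about the base model, not values stored in this config:

```python
r = 16        # LoRA rank, from the config above
layers = 32   # assumed number of decoder layers in Mistral-7B

# Assumed (fan_in, fan_out) shapes of the seven target_modules per layer.
hidden, kv_dim, mlp = 4096, 1024, 14336
shapes = {
    "q_proj": (hidden, hidden),
    "k_proj": (hidden, kv_dim),
    "v_proj": (hidden, kv_dim),
    "o_proj": (hidden, hidden),
    "gate_proj": (hidden, mlp),
    "up_proj": (hidden, mlp),
    "down_proj": (mlp, hidden),
}

# Each adapted linear gains two low-rank matrices: A (r x fan_in)
# and B (fan_out x r), i.e. r * (fan_in + fan_out) extra parameters.
per_layer = sum(r * (fan_in + fan_out) for fan_in, fan_out in shapes.values())
total = per_layer * layers
print(f"{total:,} trainable LoRA parameters")  # ~41.9M under these assumptions
```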

io-chatbot-v3/checkpoint-128/adapter_model.safetensors (Stored with Git LFS) Normal file

Binary file not shown.

io-chatbot-v3/checkpoint-128/optimizer.pt (Stored with Git LFS) Normal file

Binary file not shown.

Some files were not shown because too many files have changed in this diff.