
Set up the Python side first: upgrade `transformers` and install CPU-only PyTorch wheels.

pip install --upgrade transformers

pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu


Older `transformers` releases depend on a `tokenizers` wheel that may need to be compiled from source, so install the Rust toolchain as well:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

. "$HOME/.cargo/env"

rustc --version

pip install transformers==4.31.0

rustup install 1.70.0
rustup default 1.70.0


(venv) ubuntu@ubuntu:~/LLM_MODEL$ llama model list
+-----------------------------------------+-----------------------------------------------------+----------------+
| Model Descriptor                        | Hugging Face Repo                                   | Context Length |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.1-8B                             | meta-llama/Llama-3.1-8B                             | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.1-70B                            | meta-llama/Llama-3.1-70B                            | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.1-405B:bf16-mp8                  | meta-llama/Llama-3.1-405B                           | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.1-405B                           | meta-llama/Llama-3.1-405B-FP8                       | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.1-405B:bf16-mp16                 | meta-llama/Llama-3.1-405B                           | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.1-8B-Instruct                    | meta-llama/Llama-3.1-8B-Instruct                    | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.1-70B-Instruct                   | meta-llama/Llama-3.1-70B-Instruct                   | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.1-405B-Instruct:bf16-mp8         | meta-llama/Llama-3.1-405B-Instruct                  | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.1-405B-Instruct                  | meta-llama/Llama-3.1-405B-Instruct-FP8              | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.1-405B-Instruct:bf16-mp16        | meta-llama/Llama-3.1-405B-Instruct                  | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.2-1B                             | meta-llama/Llama-3.2-1B                             | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.2-3B                             | meta-llama/Llama-3.2-3B                             | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.2-11B-Vision                     | meta-llama/Llama-3.2-11B-Vision                     | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.2-90B-Vision                     | meta-llama/Llama-3.2-90B-Vision                     | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.2-1B-Instruct                    | meta-llama/Llama-3.2-1B-Instruct                    | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.2-3B-Instruct                    | meta-llama/Llama-3.2-3B-Instruct                    | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.2-1B-Instruct:int4-qlora-eo8     | meta-llama/Llama-3.2-1B-Instruct-QLORA_INT4_EO8     | 8K             |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.2-1B-Instruct:int4-spinquant-eo8 | meta-llama/Llama-3.2-1B-Instruct-SpinQuant_INT4_EO8 | 8K             |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.2-3B-Instruct:int4-qlora-eo8     | meta-llama/Llama-3.2-3B-Instruct-QLORA_INT4_EO8     | 8K             |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.2-3B-Instruct:int4-spinquant-eo8 | meta-llama/Llama-3.2-3B-Instruct-SpinQuant_INT4_EO8 | 8K             |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.2-11B-Vision-Instruct            | meta-llama/Llama-3.2-11B-Vision-Instruct            | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.2-90B-Vision-Instruct            | meta-llama/Llama-3.2-90B-Vision-Instruct            | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama3.3-70B-Instruct                   | meta-llama/Llama-3.3-70B-Instruct                   | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama-Guard-3-11B-Vision                | meta-llama/Llama-Guard-3-11B-Vision                 | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama-Guard-3-1B:int4                   | meta-llama/Llama-Guard-3-1B-INT4                    | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama-Guard-3-1B                        | meta-llama/Llama-Guard-3-1B                         | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama-Guard-3-8B                        | meta-llama/Llama-Guard-3-8B                         | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama-Guard-3-8B:int8                   | meta-llama/Llama-Guard-3-8B-INT8                    | 128K           |
+-----------------------------------------+-----------------------------------------------------+----------------+
| Llama-Guard-2-8B                        | meta-llama/Llama-Guard-2-8B                         | 4K             |
+-----------------------------------------+-----------------------------------------------------+----------------+
(venv) ubuntu@ubuntu:~/LLM_MODEL$ 
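In the listing above, the suffix after a colon (e.g. `bf16-mp8`, `int4-qlora-eo8`) names a variant of the same base model: a precision plus model-parallel degree, or a quantization scheme. A minimal sketch of splitting a descriptor into its parts (the helper name is my own, not part of the `llama` CLI):

```python
def parse_descriptor(descriptor: str):
    """Split a descriptor like 'Llama3.1-405B:bf16-mp8' into
    the base model name and its optional variant tag."""
    base, sep, variant = descriptor.partition(":")
    return base, (variant if sep else None)

print(parse_descriptor("Llama3.1-405B:bf16-mp8"))  # ('Llama3.1-405B', 'bf16-mp8')
print(parse_descriptor("Llama3.3-70B-Instruct"))   # ('Llama3.3-70B-Instruct', None)
```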


llama model download --source meta --model-id  Llama3.3-70B-Instruct
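Before kicking off the download, it is worth checking free disk space: a 70B-parameter model in bf16 stores two bytes per weight, so roughly 140 GB before any overhead. A back-of-the-envelope calculation (the bytes-per-parameter figures are general rules of thumb, not values reported by the `llama` CLI):

```python
def checkpoint_size_gb(n_params_billion: float, bytes_per_param: float) -> float:
    """Rough checkpoint size in GB: parameter count times bytes per parameter."""
    return n_params_billion * 1e9 * bytes_per_param / 1e9

# bf16 ~ 2 bytes/param, fp8 ~ 1, int4 ~ 0.5
print(checkpoint_size_gb(70, 2))   # 140.0 GB -> Llama3.3-70B-Instruct in bf16
print(checkpoint_size_gb(405, 1))  # 405.0 GB -> Llama3.1-405B in FP8
```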


    _|    _|  _|    _|    _|_|_|    _|_|_|  _|_|_|  _|      _|    _|_|_|      _|_|_|_|    _|_|      _|_|_|  _|_|_|_|

    _|    _|  _|    _|  _|        _|          _|    _|_|    _|  _|            _|        _|    _|  _|        _|

    _|_|_|_|  _|    _|  _|  _|_|  _|  _|_|    _|    _|  _|  _|  _|  _|_|      _|_|_|    _|_|_|_|  _|        _|_|_|

    _|    _|  _|    _|  _|    _|  _|    _|    _|    _|    _|_|  _|    _|      _|        _|    _|  _|        _|

    _|    _|    _|_|      _|_|_|    _|_|_|  _|_|_|  _|      _|    _|_|_|      _|        _|    _|    _|_|_|  _|_|_|_|


    A token is already saved on your machine. Run `huggingface-cli whoami` to get more information or `huggingface-cli logout` if you want to log out.

    Setting a new token will erase the existing one.

    To log in, `huggingface_hub` requires a token generated from https://huggingface.co/settings/tokens .

Enter your token (input will not be visible): 

Add token as git credential? (Y/n) 

Token is valid (permission: write).

The token `113` has been saved to /Users/junhoha/.cache/huggingface/stored_tokens

Your token has been saved in your configured git credential helpers (osxkeychain).

Your token has been saved to /Users/junhoha/.cache/huggingface/token

from transformers import AutoTokenizer, AutoModel

# Model name
model_name = "deepseek-ai/DeepSeek-V3"

# Load the tokenizer and model
# (the DeepSeek-V3 repo ships custom modeling code, so trust_remote_code is required)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)

# Test input
input_text = "Test this sentence with the DeepSeek-V3 model."
inputs = tokenizer(input_text, return_tensors="pt")

# Model output
outputs = model(**inputs)

print(outputs)
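For a raw forward pass like the one above, the usual way to collapse per-token hidden states into a single sentence vector is attention-masked mean pooling. A dependency-free sketch of that arithmetic on plain lists (illustrative numbers, not real model activations):

```python
def mean_pool(hidden_states, attention_mask):
    """Average token vectors, counting only positions where the mask is 1."""
    dim = len(hidden_states[0])
    totals = [0.0] * dim
    count = 0
    for vec, m in zip(hidden_states, attention_mask):
        if m:
            count += 1
            for i, v in enumerate(vec):
                totals[i] += v
    return [t / count for t in totals]

# Three tokens of a 2-dim "hidden state"; the last position is padding.
states = [[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]
mask = [1, 1, 0]
print(mean_pool(states, mask))  # [2.0, 3.0]
```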


