research notes
How do GPT-2 and GPT-3 work?
*** Excerpts of relevant parts from Jay Alammar's blog ***
https://jalammar.github.io/illustrated-gpt2/
https://jalammar.github.io/how-gpt3-works-visualizations-animations/

The Illustrated GPT-2
- Looking Inside GPT-2

The simplest way to run a trained GPT-2 is to allow it to ramble on its own (which is technically called generating unconditional samples) – alternatively, we can give it a prompt to have it speak about..
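The excerpt distinguishes two ways of running the trained model: letting it ramble on its own (unconditional samples) and conditioning it on a prompt. A minimal sketch of both modes, assuming the Hugging Face `transformers` library and the public `gpt2` checkpoint (neither is specified in the original post):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Conditional generation: the model continues a given prompt.
# The prompt text here is an arbitrary example.
input_ids = tokenizer.encode("The robot said", return_tensors="pt")
out = model.generate(
    input_ids,
    max_length=40,
    do_sample=True,           # sample instead of greedy decoding
    top_k=50,                 # illustrative sampling setting
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))

# Unconditional generation: seed with the end-of-text token and
# let the model ramble on its own.
bos_ids = tokenizer.encode(tokenizer.eos_token, return_tensors="pt")
out = model.generate(
    bos_ids,
    max_length=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The sampling flags are illustrative defaults, not values from the post; greedy decoding (`do_sample=False`) tends to fall into repetitive loops with GPT-2.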
GPT/Concept Definitions
2023. 2. 26. 21:33