Word2Vec pre-trained models for the KCC corpus (Korean language)


[Demo] http://nlp.kookmin.ac.kr/kcc/word2vec/demo

1. Testing the Korean Word2Vec/FastText pre-trained models in Python ("wv_KMA_tokens_test.py" in KCC_KMA_Word2Vec.zip)

# Download FastText-KCC150.zip -- the "FastText pre-trained model for KCC150",
# one of the pre-trained models at "http://nlp.kookmin.ac.kr/kcc/word2vec"
# Install Python and the 'gensim' library, then run:
# C> pip install gensim
# C> python

from gensim.models import Word2Vec

model_name = "FastText-KCC150.model"  # included in "FastText-KCC150.zip"
model = Word2Vec.load(model_name)

# Word-vector arithmetic: subtract (-) a single concept from a compound concept, then add (+) another single concept
# compound concept('여배우', actress) - single concept('여자', woman) + single concept('남자', man)
# Ex1) (여배우 + 남자) - 여자 = ?
print(model.wv.most_similar(positive=[u'여배우', u'남자'], negative=[u'여자'], topn=10))
print(model.wv.most_similar(positive=[u'여왕', u'남자'], negative=[u'여자'], topn=10))

# Ex2) (서울 - 대한민국) + 일본 = ?   i.e., (Seoul - South Korea) + Japan = ?
print(model.wv.most_similar(positive=[u'서울', u'일본'], negative=[u'대한민국'], topn=10))

# Example: print the top-n most similar words
model.wv.most_similar('보수', topn=20)
model.wv.most_similar('진보', topn=20)

model.wv.most_similar('혐오', topn=20)
model.wv.most_similar('성소수자', topn=20)
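Under the hood, most_similar performs exactly the vector arithmetic above and then ranks every other vocabulary word by cosine similarity. A minimal stdlib sketch with toy 3-dimensional vectors (all words and values are invented for illustration, not taken from any KCC model):

```python
import math

# Toy word vectors (hypothetical values, for illustration only).
vectors = {
    "queen": [0.9, 0.8, 0.1],
    "king":  [0.9, 0.1, 0.8],
    "woman": [0.1, 0.9, 0.1],
    "man":   [0.1, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def most_similar(positive, negative, topn=3):
    # Vector arithmetic: sum the positive vectors, subtract the negative ones.
    query = [0.0] * 3
    for w in positive:
        query = [q + v for q, v in zip(query, vectors[w])]
    for w in negative:
        query = [q - v for q, v in zip(query, vectors[w])]
    # Rank the remaining vocabulary by cosine similarity to the query vector.
    candidates = [(w, cosine(query, v)) for w, v in vectors.items()
                  if w not in positive and w not in negative]
    return sorted(candidates, key=lambda t: -t[1])[:topn]

# queen - woman + man ~ king
print(most_similar(positive=["queen", "man"], negative=["woman"]))
```

This is what gensim's most_similar does conceptually; the real implementation uses normalized NumPy matrices for speed.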

2. [Download] Korean FastText/Word2Vec models -- choose one of the .zip files below

FastText pre-trained models for the KCC corpus (Korean language)
  FastText pre-trained model for KCC150 (recommended)
  FastText pre-trained model for KCC460

Word2Vec pre-trained models for the KCC corpus (Korean language)
  Word2Vec pre-trained model for KCC150 (recommended)
  Word2Vec pre-trained model for KCCq28
  Word2Vec pre-trained model for KCC940
  Word2Vec pre-trained model for KCC460

  Word2Vec pre-trained model for KCC150+q28+940
  Word2Vec pre-trained model for KCC150+q28

  Word2Vec pre-trained model for KCC460+150+940+q28
  Word2Vec pre-trained model for KCC460+150+940
  Word2Vec pre-trained model for KCC460+150

  Word2Vec pre-trained model for KCCq28+150+940+460


3. Python sources for Word2Vec/FastText training

KCC_KMA_Word2Vec.zip -- Word2Vec training for a "KMA-tokenized Korean corpus".
  wv_KMA_tokens_train.py -- train on a single file
  wv_KMA_tokens_train_ADD.py -- train on two or more files
  wv_KMA_tokens_test.py -- load a pre-trained model & test

KCC_KMA_FastText_doc2vec.zip -- FastText/doc2vec training for a "KMA-tokenized Korean corpus".
  FastText_Train.py -- train on a single file
  FastText_Train_ADD.py -- train on two or more files
  doc2vec.py -- model training & test


4. Korean tokenized raw corpus for Word2Vec/FastText training

Word2Vec training is performed by "wv_KMA_tokens_train.py" on a "Korean tokenized raw corpus".
--> A "Korean tokenized raw corpus" is a corpus tokenized by the KLT2000 Korean morphological analyzer.
--> Download one of the "Korean tokenized raw corpus" files below for self-training.

  KMA tokenized KCC corpus for Word Embedding: KCC150 ("EUCKR" encoded file)
  KMA tokenized KCC corpus for Word Embedding: KCC940 ("EUCKR" encoded file)
  KMA tokenized KCC corpus for Word Embedding: KCCq28 ("EUCKR" encoded file)
  KMA tokenized KCC corpus for Word Embedding: KCC460 ("EUCKR" encoded file)
  --> KMA tokenized KCC corpus for Word Embedding: KCC460 ("UTF8" encoded file)
  Korean Wiki Text -- ko_wiki_text.zip
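Since most of the corpus files above are EUC-KR encoded, they must be opened with the matching encoding before training. A minimal stdlib sketch of reading such a file into the one-sentence-per-line token lists that the training scripts consume (the sample file name is hypothetical):

```python
# Write a tiny EUC-KR sample standing in for a downloaded corpus file.
sample = "대한민국 수도 서울\n일본 수도 도쿄\n"
with open("kcc_sample.txt", "w", encoding="euc-kr") as f:
    f.write(sample)

def read_tokenized(path, encoding="euc-kr"):
    """Yield each non-empty line as a list of tokens (one sentence per line)."""
    with open(path, encoding=encoding) as f:
        for line in f:
            tokens = line.split()
            if tokens:
                yield tokens

sentences = list(read_tokenized("kcc_sample.txt"))
print(sentences[0])  # ['대한민국', '수도', '서울']
```

For the UTF-8 variant of KCC460, pass encoding="utf-8" instead.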

The files above are created automatically by the KLT2000 Korean morphological analyzer; see below for details.
--> You can download the "KCC (Korean raw corpus)" at http://nlp.kookmin.ac.kr/kcc

  C> index2018.exe -c input.txt output.txt

  https://cafe.naver.com/nlpkang/3
  https://cafe.naver.com/nlpk/278