# Reproducibility Guide for Applying GANs and Diffusion Models in Font Design for Ethnic Minority Writing Systems

## Introduction

To ensure reproducibility, the full pipeline consists of the following steps:

1. Prepare the environment: set up the training environment according to the requirements of the models to be used.
2. Preprocess the data: use our code to build the dataset.
3. Train the GAN models and use the pre-trained GAN models to generate glyph images.
4. Use a diffusion model (DM) to refine the glyph images generated by the pre-trained GAN models.

## Requirements

The GAN models used in this study are:

- [font_translator_gan](https://github.com/ligoudaner377/font_translator_gan)
- [MF-Net](https://github.com/iamyufan/MF-Net)
- [FCAGAN](https://github.com/jtlxlf/FCAGAN)

All three models were reproduced, and new inference was carried out with them. Ensure you have the following requirements installed. It is recommended to use a virtual environment:

- NVIDIA GPU (with CUDA and cuDNN installed)
- Python 3
- torch>=0.4.1
- torchvision>=0.2.1
- dominate>=2.3.1
- visdom>=0.1.8.3

```shell
conda create -n glyph_env python=3.8 -y
conda activate glyph_env
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
pip install "dominate>=2.3.1" "visdom>=0.1.8.3"
```

## Data

Place your glyph image dataset in `./data/` (e.g., handwritten or printed fonts). Alternatively:

- Use the data provided by [font_translator_gan](https://github.com/ligoudaner377/font_translator_gan): https://drive.google.com/file/d/1XJppxR00pyk5xG-64Ia_BF12XSxeZgfa/view?pli=1
- Use our dataset (MEG-1.0): DOI 10.6084/m9.figshare.28643645
- Use our code to convert a font file for an ethnic minority writing system into glyph images.
The conversion code can be downloaded at: https://colab.research.google.com/drive/1942LOkBzmWVIEvsXywZ2E4-PuWU3qfh9?usp=drive_link

## Application

1. To train a GAN model (example: [MF-Net](https://github.com/iamyufan/MF-Net)):

```shell
python train.py --dataroot ./datasets/font --model mfnet --name mfnet_train --dataset_mode font --no_dropout --gpu_ids 0 --batch_size 100 --shuffle_dataset
```

For pre-trained weights, download them and place them in `./checkpoints/`.

2. To test the pre-trained model:

```shell
python test.py --dataroot ./datasets/font --model mfnet --dataset_mode font --eval --no_dropout --epoch 20 --name mfnet_train
```

3. To enhance the GAN outputs with a diffusion model: checkpoint models can be downloaded from https://civitai.com/models?tag=base+model or found at https://www.vegaai.net/imageToImage.

## Results

The results include the pre-trained models and the glyph images generated by the GANs and refined by the DM.
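The guide does not fix a particular diffusion pipeline, so as one possible sketch of step 3, the following uses the Hugging Face `diffusers` image-to-image pipeline to refine a GAN-generated glyph. The model ID, file paths, prompt, and `strength` value are all illustrative assumptions, not the authors' settings:

```python
from PIL import Image

def prepare_for_img2img(img, size=512):
    """Convert a (possibly grayscale) GAN glyph to the square RGB input
    that Stable Diffusion img2img pipelines expect."""
    return img.convert("RGB").resize((size, size))

if __name__ == "__main__":
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline  # pip install diffusers

    # Illustrative checkpoint; swap in one downloaded from civitai.com.
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    init = prepare_for_img2img(Image.open("results/mfnet_train/gan_glyph.png"))
    refined = pipe(
        prompt="clean, sharp, high-contrast calligraphic glyph",
        image=init,
        strength=0.35,      # low strength keeps the glyph's structure intact
        guidance_scale=7.5,
    ).images[0]
    refined.save("results/refined_glyph.png")
```

A low `strength` is the key design choice here: it limits how far the diffusion model may drift from the GAN output, so strokes are cleaned up without the character identity being altered.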