From 13485ea29c17788f95a8f539ec4b1a8ba5c472d2 Mon Sep 17 00:00:00 2001
From: Somefive <Somefive@foxmail.com>
Date: Tue, 22 Jun 2021 15:38:33 +0800
Subject: [PATCH] Update readme.md

Update the demo URLs.
---
 readme.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/readme.md b/readme.md
index 00c4304..d874739 100755
--- a/readme.md
+++ b/readme.md
@@ -10,7 +10,7 @@
 CogView is a pretrained (4B-param) transformer for text-to-image generation in general domain.
 
 * **Read** our paper [CogView: Mastering Text-to-Image Generation via Transformers](https://arxiv.org/pdf/2105.13290.pdf) on ArXiv for a formal introduction. The *PB-relax* and *Sandwich-LN* can also help you train large and deep transformers stably (e.g. eliminating NaN losses).
-* **Visit** our demo at https://lab.aminer.cn/cogview/index.html! (Without post-selection or super-resolution, currently only supports simplified Chinese input, but one can translate text from other languages into Chinese for input)
+* **Visit** our demo at the [GitHub Page](https://thudm.github.io/CogView/index.html) or [Wudao](https://wudao.aminer.cn/CogView/)! (Without post-selection or super-resolution, the demo currently supports only simplified Chinese input, but you can translate text from other languages into Chinese. Note: *Wudao* provides faster access for users in mainland China.)
 * **Download** our pretrained models from [Project Wudao-Wenhui](https://resource.wudaoai.cn/home?ind=2&name=WuDao%20WenHui&id=1399364355975327744)(悟道-文汇).
 * **Cite** our paper if you find our work is helpful~ 
 ```
-- 
GitLab