
激闘!口頭試問!-有名大学教授を突破せよ- 口頭試問シミュレータ on the GPT Store


GPT Description

By pasting the text of a paper, you can simulate a university oral examination (thesis defense).

GPT Prompt Starters

  • Attention Is All You Need

Ashish Vaswani* (Google Brain, avaswani@google.com), Noam Shazeer* (Google Brain, noam@google.com), Niki Parmar* (Google Research, nikip@google.com), Jakob Uszkoreit* (Google Research, usz@google.com), Llion Jones* (Google Research, llion@google.com), Aidan N. Gomez*† (University of Toronto, aidan@cs.toronto.edu), Łukasz Kaiser* (Google Brain, lukaszkaiser@google.com), Illia Polosukhin*‡ (illia.polosukhin@gmail.com)

*Equal contribution. Listing order is random. Jakob proposed replacing RNNs with self-attention and started the effort to evaluate this idea. Ashish, with Illia, designed and implemented the first Transformer models and has been crucially involved in every aspect of this work. Noam proposed scaled dot-product attention, multi-head attention and the parameter-free position representation and became the other person involved in nearly every detail. Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and tensor2tensor. Llion also experimented with novel model variants, was responsible for our initial codebase, and efficient inference and visualizations. Lukasz and Aidan spent countless long days designing various parts of and implementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating our research. †Work performed while at Google Brain. ‡Work performed while at Google Research.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Abstract

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.

1 Introduction

Recurrent neural networks, long short-term memory [12] and gated recurrent [7] neural networks in particular, have been firmly established as state-of-the-art approaches in sequence modeling and transduction problems such as language modeling and machine translation [29, 2, 5]. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures [31, 21, 13].

Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states h_t, as a function of the previous hidden state h_{t-1} and the input for position t. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples.
Recent work has achieved significant improvements in computational efficiency through factorization tricks [18] and conditional computation [26], while also improving model performance in the case of the latter. The fundamental constraint of sequential computation, however, remains.

Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences [2, 16]. In all but a few cases [22], however, such attention mechanisms are used in conjunction with a recurrent network.

In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.

2 Background

The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [20], ByteNet [15] and ConvS2S [8], all of which use convolutional neural networks as their basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows with the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [11]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in Section 3.2.

Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 22, 23, 19].

End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [28].

To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [14, 15] and [8].

3 Model Architecture

Most competitive neural sequence transduction models have an encoder-decoder structure [5, 2, 29]. Here, the encoder maps an input sequence of symbol representations (x_1, ..., x_n) to a sequence of continuous representations z = (z_1, ..., z_n). Given z, the decoder then generates an output sequence (y_1, ..., y_m) of symbols one element at a time. At each step the model is auto-regressive [9], consuming the previously generated symbols as additional input when generating the next.

The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively.
3.1 Encoder and Decoder Stacks

Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection [10] around each of the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension d_model = 512.

[Figure 1: The Transformer - model architecture.]

Decoder: The decoder is also composed of a stack of N = 6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.

3.2 Attention

An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.

3.2.1 Scaled Dot-Product Attention

We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of queries and keys of dimension d_k, and values of dimension d_v. We compute the dot products of the query with all keys, divide each by √d_k, and apply a softmax function to obtain the weights on the values.

[Figure 2: (left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.]

In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q. The keys and values are also packed together into matrices K and V. We compute the matrix of outputs as:

    Attention(Q, K, V) = softmax(QK^T / √d_k) V    (1)

The two most commonly used attention functions are additive attention [2], and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of 1/√d_k. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.

While for small values of d_k the two mechanisms perform similarly, additive attention outperforms dot-product attention without scaling for larger values of d_k [3]. We suspect that for large values of d_k, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients. (To illustrate why the dot products get large, assume that the components of q and k are independent random variables with mean 0 and variance 1. Then their dot product, q·k = Σ_{i=1}^{d_k} q_i k_i, has mean 0 and variance d_k.) To counteract this effect, we scale the dot products by 1/√d_k.
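Equation (1) is compact enough to check numerically. Below is a minimal NumPy sketch of scaled dot-product attention, not the authors' tensor2tensor implementation; the function name and the optional mask argument (used for the decoder's causal masking described in Section 3.2.3) are illustrative choices.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V   -- Equation (1)
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    if mask is not None:
        # Set illegal connections to -inf before the softmax (Section 3.2.3).
        scores = np.where(mask, scores, -np.inf)
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Example: 5 queries attending over 7 key-value pairs, d_k = d_v = 64.
Q, K, V = np.random.randn(5, 64), np.random.randn(7, 64), np.random.randn(7, 64)
out = scaled_dot_product_attention(Q, K, V)   # shape (5, 64)
```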
3.2.2 Multi-Head Attention

Instead of performing a single attention function with d_model-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values h times with different, learned linear projections to d_k, d_k and d_v dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding d_v-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2.

Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.

    MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O
    where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)

where the projections are parameter matrices W_i^Q ∈ R^{d_model×d_k}, W_i^K ∈ R^{d_model×d_k}, W_i^V ∈ R^{d_model×d_v} and W^O ∈ R^{h·d_v×d_model}.

In this work we employ h = 8 parallel attention layers, or heads. For each of these we use d_k = d_v = d_model/h = 64. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.

3.2.3 Applications of Attention in our Model

The Transformer uses multi-head attention in three different ways:

• In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [31, 2, 8].

• The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.

• Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to −∞) all values in the input of the softmax which correspond to illegal connections. See Figure 2.

3.3 Position-wise Feed-Forward Networks

In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.

    FFN(x) = max(0, x W_1 + b_1) W_2 + b_2    (2)

While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is d_model = 512, and the inner layer has dimensionality d_ff = 2048.
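As a companion sketch to the equations above, here is multi-head attention together with the position-wise FFN of Equation (2) in the same minimal NumPy style; the parameter names (W_q, W_k, W_v, W_o) and the toy random initialization are illustrative assumptions, not the paper's training setup.

```python
import numpy as np

d_model, h = 512, 8
d_k = d_v = d_model // h          # 64, as in the paper
rng = np.random.default_rng(0)

def _attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V, as in Equation (1)
    s = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

# One learned projection per head, plus the output projection W^O.
W_q = rng.standard_normal((h, d_model, d_k)) * 0.02
W_k = rng.standard_normal((h, d_model, d_k)) * 0.02
W_v = rng.standard_normal((h, d_model, d_v)) * 0.02
W_o = rng.standard_normal((h * d_v, d_model)) * 0.02

def multi_head_attention(x_q, x_kv):
    # MultiHead = Concat(head_1, ..., head_h) W^O, each head from projected Q, K, V
    heads = [_attention(x_q @ W_q[i], x_kv @ W_k[i], x_kv @ W_v[i]) for i in range(h)]
    return np.concatenate(heads, axis=-1) @ W_o

def position_wise_ffn(x, W1, b1, W2, b2):
    # FFN(x) = max(0, x W1 + b1) W2 + b2   -- Equation (2)
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

x = rng.standard_normal((10, d_model))   # 10 positions
y = multi_head_attention(x, x)           # self-attention; y.shape == (10, 512)
```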
3.4 Embeddings and Softmax

Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension d_model. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [24]. In the embedding layers, we multiply those weights by √d_model.

3.5 Positional Encoding

Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension d_model as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed [8]. In this work, we use sine and cosine functions of different frequencies:

    PE(pos, 2i)   = sin(pos / 10000^{2i/d_model})
    PE(pos, 2i+1) = cos(pos / 10000^{2i/d_model})

where pos is the position and i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000·2π. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, PE_{pos+k} can be represented as a linear function of PE_{pos}.

We also experimented with using learned positional embeddings [8] instead, and found that the two versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
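The sinusoidal encoding is easy to verify numerically. A minimal NumPy sketch of the two PE formulas above (the function name is an illustrative choice, and it assumes an even d_model, as in the paper):

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    # PE(pos, 2i) = sin(pos / 10000^(2i/d_model)); PE(pos, 2i+1) = cos(same angle)
    pos = np.arange(max_len)[:, None]        # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]     # (1, d_model/2)
    angle = pos / np.power(10000.0, 2.0 * i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angle)              # even dimensions
    pe[:, 1::2] = np.cos(angle)              # odd dimensions
    return pe

# These encodings are summed with the (sqrt(d_model)-scaled) token embeddings.
pe = sinusoidal_positional_encoding(max_len=512, d_model=512)
```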
4 Why Self-Attention

In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations (x_1, ..., x_n) to another sequence of equal length (z_1, ..., z_n), with x_i, z_i ∈ R^d, such as a hidden layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we consider three desiderata.

One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required. The third is the path length between long-range dependencies in the network. Learning long-range dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network. The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies [11]. Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types.

Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations for different layer types. n is the sequence length, d is the representation dimension, k is the kernel size of convolutions and r the size of the neighborhood in restricted self-attention.

Layer Type                  | Complexity per Layer | Sequential Operations | Maximum Path Length
Self-Attention              | O(n^2 · d)           | O(1)                  | O(1)
Recurrent                   | O(n · d^2)           | O(n)                  | O(n)
Convolutional               | O(k · n · d^2)       | O(1)                  | O(log_k(n))
Self-Attention (restricted) | O(r · n · d)         | O(1)                  | O(n/r)

As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length n is smaller than the representation dimensionality d, which is most often the case with sentence representations used by state-of-the-art models in machine translation, such as word-piece [31] and byte-pair [25] representations. To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size r in the input sequence centered around the respective output position. This would increase the maximum path length to O(n/r). We plan to investigate this approach further in future work.

A single convolutional layer with kernel width k < n does not connect all pairs of input and output positions. Doing so requires a stack of O(n/k) convolutional layers in the case of contiguous kernels, or O(log_k(n)) in the case of dilated convolutions [15], increasing the length of the longest paths between any two positions in the network. Convolutional layers are generally more expensive than recurrent layers, by a factor of k. Separable convolutions [6], however, decrease the complexity considerably, to O(k·n·d + n·d^2). Even with k = n, however, the complexity of a separable convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer, the approach we take in our model.

As a side benefit, self-attention could yield more interpretable models. We inspect attention distributions from our models and present and discuss examples in the appendix. Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences.

5 Training

This section describes the training regime for our models.

5.1 Training Data and Batching

We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding [3], which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary [31]. Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens.

5.2 Hardware and Schedule

We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models (described on the bottom line of Table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days).

5.3 Optimizer

We used the Adam optimizer [17] with β1 = 0.9, β2 = 0.98 and ε = 10^-9. We varied the learning rate over the course of training, according to the formula:

    lrate = d_model^{-0.5} · min(step_num^{-0.5}, step_num · warmup_steps^{-1.5})    (3)

This corresponds to increasing the learning rate linearly for the first warmup_steps training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used warmup_steps = 4000.
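Equation (3) translates directly into a few lines. A sketch using the paper's base-model values as defaults (the function name and the step-zero guard are illustrative choices):

```python
def transformer_lrate(step_num, d_model=512, warmup_steps=4000):
    # lrate = d_model^-0.5 * min(step_num^-0.5, step_num * warmup_steps^-1.5)
    step_num = max(step_num, 1)   # avoid division by zero at step 0
    return d_model ** -0.5 * min(step_num ** -0.5,
                                 step_num * warmup_steps ** -1.5)

# Linear warmup then inverse-square-root decay:
# transformer_lrate(1) ~ 1.7e-7, peak transformer_lrate(4000) ~ 7e-4, decreasing after.
```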
5.4 Regularization

We employ three types of regularization during training:

Residual Dropout: We apply dropout [27] to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of P_drop = 0.1.

Table 2: The Transformer achieves better BLEU scores than previous state-of-the-art models on the English-to-German and English-to-French newstest2014 tests at a fraction of the training cost.

Model                           | BLEU EN-DE | BLEU EN-FR | Training Cost EN-DE (FLOPs) | Training Cost EN-FR (FLOPs)
ByteNet [15]                    | 23.75      |            |                             |
Deep-Att + PosUnk [32]          |            | 39.2       |                             | 1.0·10^20
GNMT + RL [31]                  | 24.6       | 39.92      | 2.3·10^19                   | 1.4·10^20
ConvS2S [8]                     | 25.16      | 40.46      | 9.6·10^18                   | 1.5·10^20
MoE [26]                        | 26.03      | 40.56      | 2.0·10^19                   | 1.2·10^20
Deep-Att + PosUnk Ensemble [32] |            | 40.4       |                             | 8.0·10^20
GNMT + RL Ensemble [31]         | 26.30      | 41.16      | 1.8·10^20                   | 1.1·10^21
ConvS2S Ensemble [8]            | 26.36      | 41.29      | 7.7·10^19                   | 1.2·10^21
Transformer (base model)        | 27.3       | 38.1       | 3.3·10^18                   | 3.3·10^18
Transformer (big)               | 28.4       | 41.0       | 2.3·10^19                   | 2.3·10^19

Label Smoothing: During training, we employed label smoothing of value ε_ls = 0.1 [30]. This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.
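Label smoothing with ε_ls = 0.1 replaces each one-hot target distribution with a softened one. A minimal NumPy sketch of the standard uniform-smoothing formulation (the helper name is illustrative; this is not code from the paper):

```python
import numpy as np

def smooth_labels(target_ids, vocab_size, eps=0.1):
    # Put (1 - eps) on the correct token and spread eps uniformly
    # over the remaining vocab_size - 1 tokens.
    one_hot = np.eye(vocab_size)[target_ids]
    return one_hot * (1.0 - eps) + (1.0 - one_hot) * (eps / (vocab_size - 1))

targets = smooth_labels(np.array([2, 0, 1]), vocab_size=5)   # each row sums to 1
```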
6 Results

6.1 Machine Translation

On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big) in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0 BLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is listed in the bottom line of Table 3. Training took 3.5 days on 8 P100 GPUs. Even our base model surpasses all previously published models and ensembles, at a fraction of the training cost of any of the competitive models.

On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0, outperforming all of the previously published single models, at less than 1/4 the training cost of the previous state-of-the-art model. The Transformer (big) model trained for English-to-French used dropout rate P_drop = 0.1, instead of 0.3.

For the base models, we used a single model obtained by averaging the last 5 checkpoints, which were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We used beam search with a beam size of 4 and length penalty α = 0.6 [31]. These hyperparameters were chosen after experimentation on the development set. We set the maximum output length during inference to input length + 50, but terminate early when possible [31].

Table 2 summarizes our results and compares our translation quality and training costs to other model architectures from the literature. We estimate the number of floating point operations used to train a model by multiplying the training time, the number of GPUs used, and an estimate of the sustained single-precision floating-point capacity of each GPU. (We used values of 2.8, 3.7, 6.0 and 9.5 TFLOPS for K80, K40, M40 and P100, respectively.)

6.2 Model Variations

To evaluate the importance of different components of the Transformer, we varied our base model in different ways, measuring the change in performance on English-to-German translation on the development set, newstest2013. We used beam search as described in the previous section, but no checkpoint averaging. We present these results in Table 3.

Table 3: Variations on the Transformer architecture. Unlisted values are identical to those of the base model. All metrics are on the English-to-German translation development set, newstest2013. Listed perplexities are per-wordpiece, according to our byte-pair encoding, and should not be compared to per-word perplexities.

     | N | d_model | d_ff | h  | d_k | d_v | P_drop | ε_ls | train steps | PPL (dev) | BLEU (dev) | params ×10^6
base | 6 | 512     | 2048 | 8  | 64  | 64  | 0.1    | 0.1  | 100K        | 4.92      | 25.8       | 65
(A)  |   |         |      | 1  | 512 | 512 |        |      |             | 5.29      | 24.9       |
     |   |         |      | 4  | 128 | 128 |        |      |             | 5.00      | 25.5       |
     |   |         |      | 16 | 32  | 32  |        |      |             | 4.91      | 25.8       |
     |   |         |      | 32 | 16  | 16  |        |      |             | 5.01      | 25.4       |
(B)  |   |         |      |    | 16  |     |        |      |             | 5.16      | 25.1       | 58
     |   |         |      |    | 32  |     |        |      |             | 5.01      | 25.4       | 60
(C)  | 2 |         |      |    |     |     |        |      |             | 6.11      | 23.7       | 36
     | 4 |         |      |    |     |     |        |      |             | 5.19      | 25.3       | 50
     | 8 |         |      |    |     |     |        |      |             | 4.88      | 25.5       | 80
     |   | 256     |      |    | 32  | 32  |        |      |             | 5.75      | 24.5       | 28
     |   | 1024    |      |    | 128 | 128 |        |      |             | 4.66      | 26.0       | 168
     |   |         | 1024 |    |     |     |        |      |             | 5.12      | 25.4       | 53
     |   |         | 4096 |    |     |     |        |      |             | 4.75      | 26.2       | 90
(D)  |   |         |      |    |     |     | 0.0    |      |             | 5.77      | 24.6       |
     |   |         |      |    |     |     | 0.2    |      |             | 4.95      | 25.5       |
     |   |         |      |    |     |     |        | 0.0  |             | 4.67      | 25.3       |
     |   |         |      |    |     |     |        | 0.2  |             | 5.47      | 25.7       |
(E)  | positional embedding instead of sinusoids                         | 4.92      | 25.7       |
big  | 6 | 1024    | 4096 | 16 |     |     | 0.3    |      | 300K        | 4.33      | 26.4       | 213

In Table 3 rows (A), we vary the number of attention heads and the attention key and value dimensions, keeping the amount of computation constant, as described in Section 3.2.2. While single-head attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads.

In Table 3 rows (B), we observe that reducing the attention key size d_k hurts model quality. This suggests that determining compatibility is not easy and that a more sophisticated compatibility function than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected, bigger models are better, and dropout is very helpful in avoiding over-fitting. In row (E) we replace our sinusoidal positional encoding with learned positional embeddings [8], and observe nearly identical results to the base model.

7 Conclusion

In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention.

For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles.

We are excited about the future of attention-based models and plan to apply them to other tasks. We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video. Making generation less sequential is another research goal of ours.

The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor.

Acknowledgements: We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful comments, corrections and inspiration.

References

[1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.
[3] Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc V. Le. Massive exploration of neural machine translation architectures. CoRR, abs/1703.03906, 2017.
[4] Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733, 2016.
[5] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014.
[6] Francois Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2016.
[7] Junyoung Chung, Çaglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555, 2014.
[8] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122v2, 2017.
[9] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[11] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.
[12] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[13] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
[14] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In International Conference on Learning Representations (ICLR), 2016.
[15] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099v2, 2017.
[16] Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. Structured attention networks. In International Conference on Learning Representations, 2017.
[17] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[18] Oleksii Kuchaiev and Boris Ginsburg. Factorization tricks for LSTM networks. arXiv preprint arXiv:1703.10722, 2017.
[19] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130, 2017.
[20] Łukasz Kaiser and Samy Bengio. Can active memory replace attention? In Advances in Neural Information Processing Systems (NIPS), 2016.
[21] Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
[22] Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model. In Empirical Methods in Natural Language Processing, 2016.
[23] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304, 2017.
[24] Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016.
[25] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.
[26] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
[27] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[28] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2440–2448. Curran Associates, Inc., 2015.
[29] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
[30] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. CoRR, abs/1512.00567, 2015.
[31] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
[32] Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. CoRR, abs/1606.04199, 2016.
  • Induction of Pluripotent Stem Cells from Adult Human Fibroblasts by Defined Factors

Kazutoshi Takahashi(1), Koji Tanabe(1), Mari Ohnuki(1), Megumi Narita(1,2), Tomoko Ichisaka(1,2), Kiichiro Tomoda(3) and Shinya Yamanaka(1-4)*

(1) Department of Stem Cell Biology, Institute for Frontier Medical Sciences, Kyoto University, Kyoto 606-8507, Japan; (2) CREST, Japan Science and Technology Agency, Kawaguchi 332-0012, Japan; (3) Gladstone Institute of Cardiovascular Disease, San Francisco, CA 94158; (4) Institute for Integrated Cell-Material Sciences, Kyoto University, Kyoto 606-8507, Japan. *Contact: yamanaka@frontier.kyoto-u.ac.jp

SUMMARY

Successful reprogramming of differentiated human somatic cells into a pluripotent state would allow creation of patient- and disease-specific stem cells. We previously reported generation of induced pluripotent stem (iPS) cells, capable of germline transmission, from mouse somatic cells by transduction of four defined transcription factors. Here, we demonstrate the generation of iPS cells from adult human dermal fibroblasts with the same four factors: Oct3/4, Sox2, Klf4, and c-Myc. Human iPS cells were similar to human embryonic stem (ES) cells in morphology, proliferation, surface antigens, gene expression, epigenetic status of pluripotent cell-specific genes, and telomerase activity. Furthermore, these cells could differentiate into cell types of the three germ layers in vitro and in teratomas. These findings demonstrate that iPS cells can be generated from adult human fibroblasts.

INTRODUCTION

Embryonic stem (ES) cells, derived from the inner cell mass of mammalian blastocysts, have the ability to grow indefinitely while maintaining pluripotency (Evans and Kaufman, 1981; Martin, 1981). These properties have led to expectations that human ES cells might be useful to understand disease mechanisms, to screen effective and safe drugs, and to treat patients with various diseases and injuries, such as juvenile diabetes and spinal cord injury (Thomson et al., 1998). Use of human embryos, however, faces ethical controversies that hinder the applications of human ES cells. In addition, it is difficult to generate patient- or disease-specific ES cells, which are required for their effective application. One way to circumvent these issues is to induce pluripotent status in somatic cells by direct reprogramming (Yamanaka, 2007).

We showed that induced pluripotent stem (iPS) cells can be generated from mouse embryonic fibroblasts (MEF) and adult mouse tail-tip fibroblasts by the retrovirus-mediated transfection of four transcription factors, namely Oct3/4, Sox2, c-Myc, and Klf4 (Takahashi and Yamanaka, 2006). Mouse iPS cells are indistinguishable from ES cells in morphology, proliferation, gene expression, and teratoma formation. Furthermore, when transplanted into blastocysts, mouse iPS cells can give rise to adult chimeras, which are competent for germline transmission (Maherali et al., 2007; Okita et al., 2007; Wernig et al., 2007). These results are proof-of-principle that pluripotent stem cells can be generated from somatic cells by the combination of a small number of factors.

In the current study, we sought to generate iPS cells from adult human somatic cells by optimizing retroviral transduction in human fibroblasts and subsequent culture conditions. These efforts have enabled us to generate iPS cells from adult human dermal fibroblasts and other human somatic cells, which are comparable to human ES cells in their differentiation potential in vitro and in teratomas.
RESULTS

Optimization of Retroviral Transduction for Generating Human iPS Cells

Induction of iPS cells from mouse fibroblasts requires retroviruses with high transduction efficiencies (Takahashi and Yamanaka, 2006). We therefore optimized transduction methods in adult human dermal fibroblasts (HDF). We first introduced green fluorescent protein (GFP) into adult HDF with amphotropic retrovirus produced in PLAT-A packaging cells. As a control, we introduced GFP into mouse embryonic fibroblasts (MEF) with ecotropic retrovirus produced in PLAT-E packaging cells (Morita et al., 2000). In MEF, more than 80% of cells expressed GFP (S-Figure 1). In contrast, less than 20% of HDF expressed GFP, with significantly lower intensity than in MEF. To improve the transduction efficiency, we introduced the mouse receptor for retroviruses, Slc7a1 (Verrey et al., 2004) (also known as mCAT1), into HDF with lentivirus. We then introduced GFP into HDF-Slc7a1 with ecotropic retrovirus. This strategy yielded a transduction efficiency of 60%, with a similar intensity to that in MEF.

Generation of iPS Cells from Adult HDF

The protocol for human iPS cell induction is summarized in Figure 1A. We introduced the retroviruses containing human Oct3/4, Sox2, Klf4 and c-Myc into HDF-Slc7a1 (Figure 1B, 8 x 10^5 cells per 100-mm dish). The HDF were derived from the facial dermis of a 36-year-old Caucasian female. Six days after transduction, the cells were harvested by trypsinization and plated onto mitomycin C-treated SNL feeder cells (McMahon and Bradley, 1990) at 5 x 10^4 or 5 x 10^5 cells per 100-mm dish. The next day, the medium (DMEM containing 10% FBS) was replaced with a medium for primate ES cell culture supplemented with 4 ng/ml basic fibroblast growth factor (bFGF).

Approximately two weeks later, some granulated colonies appeared that were not similar to hES cells in morphology (Figure 1C). Around day 25, we observed distinct types of colonies that were flat and resembled hES cell colonies (Figure 1D). From 5 x 10^4 fibroblasts, we observed ~10 hES cell-like colonies and ~100 granulated colonies (7/122, 8/84, 8/171, 5/73, 6/122 and 11/213 in six independent experiments, summarized in Supplemental Table 1). At day 30, we picked hES cell-like colonies and mechanically disaggregated them into small clumps without enzymatic digestion. When starting with 5 x 10^5 fibroblasts, the dish was nearly covered with more than 300 granulated colonies. We occasionally observed some hES cell-like colonies in between the granulated cells, but it was difficult to isolate hES cell-like colonies because of the high density of granulated cells. The nature of the non-hES-like cells remains to be determined.

The hES-like cells expanded on SNL feeder cells with the primate ES cell medium containing bFGF. They formed tightly packed and flat colonies (Figure 1E). Each cell exhibited morphology similar to that of human ES cells, characterized by large nuclei and scant cytoplasm (Figure 1F). As is the case with hES cells, we occasionally observed spontaneous differentiation in the center of the colony (Figure 1G). These cells also showed similarity to hES cells in feeder dependency (S-Figure 2). They did not attach to gelatin-coated tissue-culture plates. By contrast, they maintained an undifferentiated state on Matrigel-coated plates in MEF-conditioned primate ES cell medium, but not in non-conditioned medium.
Since these cells were similar to hES cells in morphology and other aspects noted above, we will refer to the selected cells after transduction of HDF as human iPS cells, as we describe the molecular and functional evidence for this claim. Human iPS cell clones established in this study are summarized in S-Table 2.

Human iPS Cells Express hES Markers

In general, except for a few cells at the edge of the colonies, human iPS cells did not express stage-specific embryonic antigen (SSEA)-1 (Figure 1H). In contrast, they expressed hES cell-specific surface antigens (Adewumi et al., 2007), including SSEA-3, SSEA-4, tumor-related antigen (TRA)-1-60, TRA-1-81 and TRA-2-49/6E (alkaline phosphatase), and NANOG protein (Figure 1I-N). RT-PCR showed that human iPS cells expressed many undifferentiated ES cell marker genes (Adewumi et al., 2007), such as OCT3/4, SOX2, NANOG, growth and differentiation factor 3 (GDF3), reduced expression 1 (REX1), fibroblast growth factor 4 (FGF4), embryonic cell-specific gene 1 (ESG1), developmental pluripotency-associated 2 (DPPA2), DPPA4 and telomerase reverse transcriptase (hTERT), at levels equivalent to or higher than those in the hES cell line H9 and the human embryonic carcinoma cell line NTERA-2 (Figure 2A). By western blotting, protein levels of OCT3/4, SOX2, NANOG, SALL4, E-CADHERIN, and hTERT were similar in human iPS cells and hES cells (Figure 2B). Although the expression levels of Klf4 and c-Myc increased more than fivefold in HDF after the retroviral transduction (not shown), their expression levels in human iPS cells were comparable to those in HDF (Figure 2A & B), indicating retroviral silencing. RT-PCR using primers specific for retroviral transcripts confirmed efficient silencing of all four retroviruses (Figure 2C). DNA microarray analyses showed that the global gene expression patterns are similar, but not identical, between human iPS cells and hES cells (Figure 2D). Among the 32,266 genes analyzed, 5,107 genes showed more than a 5-fold difference in expression between HDF and human iPS cells, whereas 1,267 genes differed between human iPS cells and hES cells (S-Table 3 & 4).

Promoters of ES Cell-Specific Genes Are Active in Human iPS Cells

Bisulfite genomic sequencing analyses evaluating the methylation statuses of cytosine guanine dinucleotides (CpG) in the promoter regions of pluripotency-associated genes, such as OCT3/4, REX1 and NANOG, revealed that they were highly unmethylated in human iPS cells, whereas the CpG dinucleotides of the same regions were highly methylated in parental HDFs (Figure 3A). These findings indicate that these promoters are active in human iPS cells. Luciferase reporter assays also showed that the human OCT3/4 and REX1 promoters had high levels of transcriptional activity in human iPS cells and EC cells (NTERA-2), but not in HDF. The promoters of ubiquitously expressed genes, such as human RNA polymerase II (PolII), showed similar activities in both human iPS cells and HDF (Figure 3B).

We also performed chromatin immunoprecipitation to analyze the histone modification status in human iPS cells (Figure 3C). We found that histone H3 lysine 4 was methylated whereas H3 lysine 27 was demethylated in the promoter regions of Oct3/4, Sox2, and Nanog in human iPS cells. We also found that human iPS cells showed the bivalent patterns of development-associated genes, such as Gata6, Msx2, Pax6, and Hand1. These histone modification statuses are characteristic of hES cells (Pan et al., 2007).
High Telomerase Activity and Exponential Growth of Human iPS Cells

As predicted from the high expression levels of hTERT, human iPS cells showed high telomerase activity (Figure 4A). They proliferated exponentially for at least 4 months (Figure 4B). The calculated population doubling times of human iPS cells were 46.9 ± 12.4 (clone 201B2), 47.8 ± 6.6 (201B6) and 43.2 ± 11.5 (201B7) hours. These times are equivalent to the reported doubling time of hES cells (Cowan et al., 2004).

Embryoid Body-Mediated Differentiation of Human iPS Cells

To determine the differentiation ability of human iPS cells in vitro, we used floating cultivation to form embryoid bodies (EBs) (Itskovitz-Eldor et al., 2000). After 8 days in suspension culture, iPS cells formed ball-shaped structures (Figure 5A). We transferred these embryoid body-like structures to gelatin-coated plates and continued cultivation for another 8 days. Attached cells showed various types of morphologies, such as those resembling neuronal cells, cobblestone-like cells, and epithelial cells (Figure 5B-E). Immunocytochemistry detected cells positive for βIII-tubulin (a marker of ectoderm), glial fibrillary acidic protein (GFAP, ectoderm), α-smooth muscle actin (α-SMA, mesoderm), desmin (mesoderm), α-fetoprotein (AFP, endoderm), and vimentin (mesoderm and parietal endoderm) (Figure 5F-K). RT-PCR confirmed that these differentiated cells expressed forkhead box A2 (FOXA2, a marker of endoderm), AFP (endoderm), cytokeratin 8 and 18 (endoderm), SRY-box containing gene 17 (SOX17, endoderm), BRACHYURY (mesoderm), Msh homeobox 1 (MSX1, mesoderm), microtubule-associated protein 2 (MAP2, ectoderm) and paired box 6 (PAX6, ectoderm) (Figure 5L). In contrast, expression of OCT3/4, SOX2 and NANOG was markedly decreased. These data demonstrated that iPS cells could differentiate into the three germ layers in vitro.

Directed Differentiation of Human iPS Cells into Neural Cells

We next examined whether lineage-directed differentiation of human iPS cells could be induced by reported methods for hES cells. We seeded human iPS cells on a PA6 feeder layer and maintained them under differentiation conditions for two weeks (Kawasaki et al., 2000). Cells spread drastically, and some neuronal structures were observed (Figure 6A). Immunocytochemistry detected cells positive for tyrosine hydroxylase and βIII-tubulin in the culture (Figure 6B). PCR analysis revealed expression of dopaminergic neuron markers, such as aromatic-L-amino acid decarboxylase (AADC), the dopamine transporter (DAT), choline acetyltransferase (ChAT), and LIM homeobox transcription factor 1 beta (LMX1B), as well as another neuron marker, MAP2 (Figure 6C). In contrast, GFAP expression was not induced with this system. On the other hand, the expression of OCT3/4 and NANOG decreased markedly, whereas SOX2 decreased only slightly (Figure 6C). These data demonstrated that iPS cells could differentiate into neuronal cells, including dopaminergic neurons, by co-culture with PA6 cells.

Directed Differentiation of Human iPS Cells into Cardiac Cells

We next examined directed cardiac differentiation of human iPS cells with the recently reported protocol that utilizes activin A and bone morphogenetic protein (BMP) 4 (Laflamme et al., 2007). Twelve days after the induction of differentiation, clumps of cells started beating (Figure 6D, Supplemental movie).
RT-PCR showed that these cells expressed cardiomyocyte markers, such as troponin T type 2, cardiac (TnTc); myocyte enhancer factor 2C (MEF2C); myosin, light polypeptide 7, regulatory (MYL2A); myosin, heavy polypeptide 7, cardiac muscle, beta (MYHCB); and NK2 transcription factor related, locus 5 (NKX2.5) (Figure 6E). In contrast, the expression of OCT3/4, SOX2, and NANOG markedly decreased. Thus, human iPS cells can differentiate into cardiac myocytes in vitro.

Teratoma Formation from Human iPS Cells

To test pluripotency in vivo, we transplanted human iPS cells (clone 201B7) subcutaneously into the dorsal flanks of immunodeficient (SCID) mice. Nine weeks after injection, we observed tumor formation. Histological examination showed that the tumors contained various tissues (Figure 7), including gut-like epithelial tissues (endoderm), striated muscle (mesoderm), cartilage (mesoderm), neural tissues (ectoderm), and keratin-containing epidermal tissues (ectoderm).

Human iPS Cells Are Derived from HDF, not Cross-Contamination

PCR of genomic DNA of human iPS cells showed that all clones had integration of all four retroviruses (S-Figure 3A). Southern blot analysis with a c-Myc cDNA probe revealed that each clone had a unique pattern of retroviral integration sites (S-Figure 3B). In addition, the patterns of 16 short tandem repeats were completely matched between human iPS clones and parental HDF (S-Table 5). These patterns differed from those of any established hES cell lines reported on the National Institutes of Health website (http://stemcells.nih.gov/research/nihresearch/scunit/genotyping.htm). In addition, chromosomal G-band analyses showed that human iPS cells had a normal karyotype of 46XX (not shown). Thus, human iPS clones were derived from HDF and were not a result of cross-contamination. Whether generation of human iPS cells depends on minor genetic or epigenetic modification awaits further investigation.

Generation of iPS Cells from Other Human Somatic Cells

In addition to HDF, we used primary human fibroblast-like synoviocytes (HFLS) from the synovial tissue of a 69-year-old Caucasian male and BJ cells, a cell line established from neonate fibroblasts (S-Table 1 & 2). From HFLS (5 x 10^4 cells per 100-mm dish), we obtained more than 600 granulated colonies and 17 hES cell-like colonies (S-Table 1). We picked six colonies, of which only two were expandable as iPS cells (S-Figure 4). Dishes seeded with 5 x 10^5 HFLS were covered with granulated cells, and no hES cell-like colonies were distinguishable. In contrast, we obtained 7-8 and ~100 hES cell-like colonies from 5 x 10^4 and 5 x 10^5 BJ cells, respectively, with only a few granulated colonies (S-Table 1). We picked six hES cell-like colonies and generated iPS cells from five colonies (S-Figure 4). Human iPS cells derived from HFLS and BJ expressed hES cell marker genes at levels similar to or higher than those in hES cells (S-Figure 5). They differentiated into all three germ layers through EBs (S-Figure 6). STR analyses confirmed that iPS-HFLS cells and iPS-BJ cells were derived from HFLS and BJ fibroblasts, respectively (S-Table 6 & 7).

DISCUSSION

In this study, we showed that iPS cells can be generated from adult HDF and other somatic cells by retroviral transduction of the same four transcription factors used for mouse iPS cells, namely Oct3/4, Sox2, Klf4, and c-Myc.
The established human iPS cells are similar to hES cells in many aspects, including morphology, proliferation, feeder dependence, surface markers, gene expression, promoter activities, telomerase activities, in vitro differentiation, and teratoma formation. The four retroviruses are strongly silenced in human iPS cells, indicating that these cells are efficiently reprogrammed and do not depend on continuous expression of the transgenes for self-renewal.

hES cells are different from their mouse counterparts in many respects (Rao, 2004). hES cell colonies are flatter and do not override each other. hES cells depend on bFGF for self-renewal (Amit et al., 2000), whereas mouse ES cells depend on the LIF/Stat3 pathway (Matsuda et al., 1999; Niwa et al., 1998). BMP induces differentiation in hES cells (Xu et al., 2005) but is involved in self-renewal of mouse ES cells (Ying et al., 2003). Despite these differences, our data show that the same four transcription factors induce iPS cells in both human and mouse. The four factors, however, could not induce human iPS cells when fibroblasts were kept under the culture conditions for mouse ES cells after retroviral transduction (data not shown). These data suggest that the fundamental transcriptional network governing pluripotency is common to humans and mice, but the extrinsic factors and signals maintaining pluripotency are unique to each species.

The mechanism by which the four factors induce pluripotency in somatic cells remains elusive. The function of Oct3/4 and Sox2 as core transcription factors that determine pluripotency is well documented (Boyer et al., 2005; Loh et al., 2006; Wang et al., 2006). They synergistically up-regulate "stemness" genes, while suppressing differentiation-associated genes in both mouse and human ES cells. However, they cannot bind their target genes in differentiated cells, because of other inhibitory mechanisms, including DNA methylation and histone modifications. We speculate that c-Myc and Klf4 modify chromatin structure so that Oct3/4 and Sox2 can bind to their targets (Yamanaka, 2007). Notably, Klf4 interacts with p300 histone acetyltransferase and regulates gene transcription by modulating histone acetylation (Evans et al., 2007).

The negative role of c-Myc in the self-renewal of hES cells was recently reported (Sumi et al., 2007): forced expression of c-Myc induced differentiation and apoptosis of human ES cells. This is in great contrast to the positive role of c-Myc in mouse ES cells (Cartwright et al., 2005). During iPS cell generation, transgenes derived from retroviruses are silenced when the transduced fibroblasts acquire an ES-like state. The role of c-Myc in establishing iPS cells may be as a booster of reprogramming, rather than a controller of maintenance of pluripotency.

We found that each iPS clone contained 3-6 retroviral integrations for each factor. Thus, each clone had more than 20 retroviral integration sites in total, which may increase the risk of tumorigenesis. In the case of mouse iPS cells, ~20% of mice derived from iPS cells developed tumors, which were attributable, at least in part, to reactivation of the c-Myc retrovirus (Okita et al., 2007). This issue must be overcome to use iPS cells in human therapies. We have recently found that iPS cells can be generated without Myc retroviruses, albeit with lower efficiency (Nakagawa, M., Koyanagi, M., and Yamanaka, S., submitted).
Non-retroviral methods to introduce the remaining three factors, such as adenoviruses or cell-permeable recombinant proteins, should be examined in future studies. Alternatively, one might be able to identify small molecules that can induce iPS cells without gene transfer.

As is the case with mouse iPS cells, only a small portion of human fibroblasts that had been transduced with the four retroviruses acquired iPS cell identity. We obtained ~10 iPS cell colonies from 5 x 10^4 transduced HDF. From a practical point of view, this efficiency is sufficiently high, since multiple iPS cell clones can be obtained from a single experiment. From a scientific point of view, however, the low efficiency raises several possibilities. First, the origin of iPS cells may be undifferentiated stem or progenitor cells co-existing in the fibroblast culture. Another possibility is that retroviral integration into some specific loci may be required for iPS cell induction. Finally, minor genetic alterations, which could not be detected by karyotype analyses, or epigenetic alterations may be required for iPS cell induction. These issues need to be elucidated in future studies.

Our study has opened an avenue to generate patient- and disease-specific pluripotent stem cells. Even with the presence of retroviral integration, human iPS cells are useful for understanding disease mechanisms, drug screening, and toxicology. For example, hepatocytes derived from iPS cells with various genetic and disease backgrounds can be utilized in predicting the liver toxicity of drug candidates. Once the safety issue is overcome, human iPS cells should also be applicable in regenerative medicine. Human iPS cells, however, are not identical to hES cells: DNA microarray analyses detected differences between the two pluripotent stem cell lines. Further studies are essential to determine whether human iPS cells can replace hES cells in medical applications.

EXPERIMENTAL PROCEDURES

Cell Culture

HDF from the facial dermis of a 36-year-old Caucasian female and HFLS from the synovial tissue of a 69-year-old Caucasian male were purchased from Cell Applications, Inc. When received, the population doubling was less than 16 in HDF and 5 in HFLS. We used these cells for the induction of iPS cells within six and four passages after receipt, respectively. BJ fibroblasts from neonatal foreskin and NTERA-2 clone D1 human embryonic carcinoma cells were obtained from the American Type Culture Collection. Human fibroblasts, NTERA-2, PLAT-E and PLAT-A cells were maintained in Dulbecco's modified Eagle medium (DMEM, Nacalai Tesque, Japan) containing 10% fetal bovine serum (FBS, Japan Serum) and 0.5% penicillin and streptomycin (Invitrogen). 293FT cells were maintained in DMEM containing 10% FBS, 2 mM L-glutamine (Invitrogen), 1 x 10^-4 M nonessential amino acids (Invitrogen), 1 mM sodium pyruvate (Sigma) and 0.5% penicillin and streptomycin. PA6 stromal cells (RIKEN Bioresource Center, Japan) were maintained in α-MEM containing 10% FBS and 0.5% penicillin and streptomycin.

iPS cells were generated and maintained in Primate ES medium (ReproCELL, Japan) supplemented with 4 ng/ml recombinant human basic fibroblast growth factor (bFGF, WAKO, Japan). For passaging, human iPS cells were washed once with PBS, and then incubated with DMEM/F12 containing 1 mg/ml collagenase IV (Invitrogen) at 37°C. When colonies at the edge of the dish started dissociating from the bottom, the DMEM/F12/collagenase was removed and the cells were washed with primate ES cell medium. Cells were scraped and collected into a 15-ml conical tube.
Our study has opened an avenue to generate patient- and disease-specific pluripotent stem cells. Even with the presence of retroviral integration, human iPS cells are useful for understanding disease mechanisms, drug screening, and toxicology. For example, hepatocytes derived from iPS cells with various genetic and disease backgrounds could be used to predict the liver toxicity of drug candidates. Once the safety issue is overcome, human iPS cells should also be applicable in regenerative medicine. Human iPS cells, however, are not identical to hES cells: DNA microarray analyses detected differences between the two pluripotent stem cell lines. Further studies are essential to determine whether human iPS cells can replace hES cells in medical applications.

EXPERIMENTAL PROCEDURES

Cell Culture
HDF from the facial dermis of a 36-year-old Caucasian female and HFLS from the synovial tissue of a 69-year-old Caucasian male were purchased from Cell Applications, Inc. When received, the population doubling was less than 16 in HDF and 5 in HFLS. We used these cells for the induction of iPS cells within six and four passages of receipt, respectively. BJ fibroblasts from neonatal foreskin and NTERA-2 clone D1 human embryonic carcinoma cells were obtained from the American Type Culture Collection. Human fibroblasts, NTERA-2, PLAT-E, and PLAT-A cells were maintained in Dulbecco's modified Eagle medium (DMEM, Nacalai Tesque, Japan) containing 10% fetal bovine serum (FBS, Japan Serum) and 0.5% penicillin and streptomycin (Invitrogen). 293FT cells were maintained in DMEM containing 10% FBS, 2 mM L-glutamine (Invitrogen), 1 × 10⁻⁴ M nonessential amino acids (Invitrogen), 1 mM sodium pyruvate (Sigma), and 0.5% penicillin and streptomycin. PA6 stromal cells (RIKEN BioResource Center, Japan) were maintained in α-MEM containing 10% FBS and 0.5% penicillin and streptomycin. iPS cells were generated and maintained in Primate ES medium (ReproCELL, Japan) supplemented with 4 ng/ml recombinant human basic fibroblast growth factor (bFGF, WAKO, Japan). For passaging, human iPS cells were washed once with PBS and then incubated with DMEM/F12 containing 1 mg/ml collagenase IV (Invitrogen) at 37°C. When colonies at the edge of the dish started dissociating from the bottom, the DMEM/F12/collagenase was removed and the cells were washed with Primate ES cell medium. Cells were scraped and collected into a 15-ml conical tube. An appropriate volume of the medium was added, and the contents were transferred to a new dish on SNL feeder cells. The split ratio was routinely 1:3. For feeder-free culture of iPS cells, the plate was coated with 0.3 mg/ml Matrigel (growth factor reduced, BD Biosciences) at 4°C overnight. The plate was warmed to room temperature before use. Unbound Matrigel was aspirated off and washed out with DMEM/F12. iPS cells were seeded on Matrigel-coated plates in MEF-conditioned or non-conditioned Primate ES cell medium, both supplemented with 4 ng/ml bFGF. The medium was changed daily. For preparation of MEF-conditioned medium, MEFs derived from a pool of embryonic day 13.5 ICR mouse embryos were plated at 1 × 10⁶ cells per 100-mm dish and incubated overnight. The next day, the cells were washed once with PBS and cultured in 10 ml of Primate ES cell medium. After a twenty-four-hour incubation, the supernatant of the MEF culture was collected, filtered through a 0.22-μm pore-size filter, and stored at −20°C until use.

Plasmid Construction
The open reading frame of human OCT3/4 was amplified by RT-PCR and cloned into pCR2.1-TOPO. An EcoRI fragment of pCR2.1-hOCT3/4 was introduced into the EcoRI site of the pMXs retroviral vector. To discriminate between experiments, we introduced a 20-bp random sequence, which we designated the N20 barcode, into the NotI/SalI site of the OCT3/4 expression vector. We used a unique barcode sequence in each experiment to avoid inter-experimental contamination. The open reading frames of human SOX2, KLF4, and c-MYC were also amplified by RT-PCR and subcloned into pENTR-D-TOPO (Invitrogen). All of the genes subcloned into pENTR-D-TOPO were transferred to pMXs by the Gateway cloning system (Invitrogen), according to the manufacturer's instructions. The mouse Slc7a1 ORF was also amplified, subcloned into pENTR-D-TOPO, and transferred to pLenti6/UbC/V5-DEST (Invitrogen) by the Gateway system. The regulatory regions of the human OCT3/4 and REX1 genes were amplified by PCR and subcloned into pCRXL-TOPO (Invitrogen). For phOCT4-Luc and phREX1-Luc, the fragments were removed from the pCRXL vector by KpnI/BglII digestion and subcloned into the KpnI/BglII site of pGV-BM2. For pPolII-Luc, an AatII (blunted)/NheI fragment of pQBI-polII was inserted into the KpnI (blunted)/NheI site of pGV-BM2. All of the fragments were verified by sequencing. Primer sequences are shown in S-Table 8.
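The N20 barcode above is nothing more than a random 20-bp tag. The paper does not say how the sequences were generated; purely as an illustration, a sketch like the following (the function name, the seeding, and the per-experiment bookkeeping are all hypothetical) captures the idea of drawing one fresh, never-reused tag per experiment:

```python
import random

BASES = "ACGT"

def make_n20_barcode(used, rng):
    """Draw random 20-bp sequences until one not used in any previous
    experiment is found; record it and return it."""
    while True:
        barcode = "".join(rng.choice(BASES) for _ in range(20))
        if barcode not in used:
            used.add(barcode)
            return barcode

used_barcodes = set()          # one unique tag per induction experiment
rng = random.Random(20071120)  # fixed seed only so the sketch is reproducible
for experiment in ("201B", "243H", "246G"):  # experiment IDs from S-Table 1
    print(experiment, make_n20_barcode(used_barcodes, rng))
```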
Lentivirus Production and Infection
293FT cells (Invitrogen) were plated at 6 × 10⁶ cells per 100-mm dish and incubated overnight. The cells were transfected with 3 μg of pLenti6/UbC-Slc7a1 along with 9 μg of ViraPower packaging mix by Lipofectamine 2000 (Invitrogen), according to the manufacturer's instructions. Forty-eight h after transfection, the supernatant of the transfectant was collected and filtered through a 0.45-μm pore-size cellulose acetate filter (Whatman). Human fibroblasts were seeded at 8 × 10⁵ cells per 100-mm dish 1 day before transduction. The medium was replaced with the virus-containing supernatant supplemented with 4 μg/ml polybrene (Nacalai Tesque), and the cells were incubated for 24 h.

Retroviral Infection and iPS Cell Generation
PLAT-E packaging cells were plated at 8 × 10⁶ cells per 100-mm dish and incubated overnight. The next day, the cells were transfected with pMXs vectors using FuGENE 6 transfection reagent (Roche). Twenty-four h after transfection, the medium was collected as the first virus-containing supernatant and replaced with fresh medium, which was collected after another 24 h as the second virus-containing supernatant. Human fibroblasts expressing the mouse Slc7a1 gene were seeded at 8 × 10⁵ cells per 100-mm dish 1 day before transduction. The virus-containing supernatants were filtered through a 0.45-μm pore-size filter and supplemented with 4 μg/ml polybrene. Equal amounts of supernatants containing each of the four retroviruses were mixed, transferred to the fibroblast dish, and incubated overnight. Twenty-four h after transduction, the virus-containing medium was replaced with the second supernatant. Six days after transduction, fibroblasts were harvested by trypsinization and re-plated at 5 × 10⁴ cells per 100-mm dish on an SNL feeder layer. The next day, the medium was replaced with hES medium supplemented with 4 ng/ml bFGF. The medium was changed every other day. Thirty days after transduction, colonies were picked and transferred into 0.2 ml of hES cell medium. The colonies were mechanically dissociated into small clumps by pipetting up and down. The cell suspension was transferred onto SNL feeders in 24-well plates. We defined this stage as passage 1.

RNA Isolation and Reverse Transcription
Total RNA was purified with Trizol reagent (Invitrogen) and treated with the Turbo DNA-free kit (Ambion) to remove genomic DNA contamination. One microgram of total RNA was used for the reverse transcription reaction with ReverTraAce-α (Toyobo, Japan) and a dT20 primer, according to the manufacturer's instructions. PCR was performed with ExTaq (Takara, Japan). Quantitative PCR was performed with Platinum SYBR Green qPCR Supermix UDG (Invitrogen) and analyzed with the 7300 real-time PCR system (Applied Biosystems). Primer sequences are shown in S-Table 8.
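Figure 2C expresses transgene levels relative to HDF/4f-6d, which is set to 1 in each experiment. The paper does not state the quantification model behind these relative values; one common choice for SYBR Green data is the comparative Ct (2^-ΔΔCt) method, sketched below with invented Ct values and a hypothetical housekeeping reference gene:

```python
def relative_expression(ct_target, ct_reference, calibrator_dct):
    """Comparative Ct (2^-ddCt) method.

    ct_target / ct_reference: Ct of the transgene and of a housekeeping
    reference in the same sample (the reference gene is an assumption;
    the paper does not name one for Figure 2C).
    calibrator_dct: dCt of the calibrator sample (HDF/4f-6d), which by
    construction gets a relative expression of 1.
    """
    dct_sample = ct_target - ct_reference
    return 2.0 ** -(dct_sample - calibrator_dct)

# Invented Ct values -- not measurements from the paper.
calibrator_dct = 22.0 - 17.0                             # HDF/4f-6d
print(relative_expression(22.0, 17.0, calibrator_dct))   # 1.0 by definition
print(relative_expression(26.5, 17.2, calibrator_dct))   # ~0.05: largely silenced
```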
Alkaline Phosphatase Staining and Immunocytochemistry
Alkaline phosphatase staining was performed using the Leukocyte Alkaline Phosphatase kit (Sigma). For immunocytochemistry, cells were fixed with PBS containing 4% paraformaldehyde for 10 min at room temperature. After washing with PBS, the cells were treated with PBS containing 5% normal goat or donkey serum (Chemicon), 1% bovine serum albumin (BSA, Nacalai Tesque), and 0.1% Triton X-100 for 45 min at room temperature. Primary antibodies included SSEA1 (1:100, Developmental Studies Hybridoma Bank), SSEA3 (1:10, a kind gift from Dr. Peter W. Andrews), SSEA4 (1:100, Developmental Studies Hybridoma Bank), TRA-2-49/6E (1:20, Developmental Studies Hybridoma Bank), TRA-1-60 (1:50, a kind gift from Dr. Peter W. Andrews), TRA-1-81 (1:50, a kind gift from Dr. Peter W. Andrews), Nanog (1:20, AF1997, R&D Systems), βIII-tubulin (1:100, CB412, Chemicon), glial fibrillary acidic protein (1:500, Z0334, DAKO), α-smooth muscle actin (pre-diluted, N1584, DAKO), desmin (1:100, RB-9014, Lab Vision), vimentin (1:100, SC-6260, Santa Cruz), α-fetoprotein (1:100, MAB1368, R&D Systems), and tyrosine hydroxylase (1:100, AB152, Chemicon). Secondary antibodies were cyanine 3 (Cy3)-conjugated goat anti-rat IgM (1:500, Jackson ImmunoResearch), Alexa546-conjugated goat anti-mouse IgM (1:500, Invitrogen), Alexa488-conjugated goat anti-rabbit IgG (1:500, Invitrogen), Alexa488-conjugated donkey anti-goat IgG (1:500, Invitrogen), Cy3-conjugated goat anti-mouse IgG (1:500, Chemicon), and Alexa488-conjugated goat anti-mouse IgG (1:500, Invitrogen). Nuclei were stained with 1 μg/ml Hoechst 33342 (Invitrogen).

In Vitro Differentiation
For EB formation, human iPS cells were harvested by treatment with collagenase IV. The clumps of cells were transferred to poly(2-hydroxyethyl methacrylate)-coated dishes in DMEM/F12 containing 20% knockout serum replacement (KSR, Invitrogen), 2 mM L-glutamine, 1 × 10⁻⁴ M nonessential amino acids, 1 × 10⁻⁴ M 2-mercaptoethanol (Invitrogen), and 0.5% penicillin and streptomycin. The medium was changed every other day. After 8 days as a floating culture, EBs were transferred to gelatin-coated plates and cultured in the same medium for another 8 days. Co-culture with PA6 was used for differentiation into dopaminergic neurons. PA6 cells were plated on gelatin-coated 6-well plates and incubated for 4 days to reach confluence. Small clumps of iPS cells were plated on the PA6 feeder layer in Glasgow minimum essential medium (Invitrogen) containing 10% KSR (Invitrogen), 1 × 10⁻⁴ M nonessential amino acids, 1 × 10⁻⁴ M 2-mercaptoethanol (Invitrogen), and 0.5% penicillin and streptomycin. For cardiomyocyte differentiation, iPS cells were maintained on Matrigel-coated plates in MEF-CM supplemented with 4 ng/ml bFGF for 6 days. The medium was then replaced with RPMI1640 (Invitrogen) plus B27 supplement (Invitrogen) medium (RPMI/B27) supplemented with 100 ng/ml human recombinant activin A (R&D Systems) for 24 h, followed by 10 ng/ml human recombinant bone morphogenetic protein 4 (BMP4, R&D Systems) for 4 days. After cytokine stimulation, the cells were maintained in RPMI/B27 without any cytokines. The medium was changed every other day.

Bisulfite Sequencing
Genomic DNA (1 μg) was treated with the CpGenome DNA modification kit (Chemicon), according to the manufacturer's recommendations. Treated DNA was purified with a QIAquick column (QIAGEN). The promoter regions of the human OCT3/4, NANOG, and REX1 genes were amplified by PCR. The PCR products were subcloned into pCR2.1-TOPO. Ten clones of each sample were verified by sequencing with the M13 universal primer. Primer sequences used for PCR amplification are provided in S-Table 8.
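The methylation calls behind Figure 3A follow mechanically from the chemistry just described: after bisulfite conversion and PCR, an unmethylated cytosine reads as T, while a methylated CpG cytosine still reads as C. A minimal sketch of that comparison, using toy sequences rather than the actual OCT3/4, NANOG, or REX1 promoter amplicons:

```python
def call_cpg_methylation(reference, clone):
    """Return (position, methylated) for every CpG in the reference.

    A reference C that still reads as C in the bisulfite-converted clone
    was protected by methylation; a C read as T was unmethylated.
    Assumes the clone is aligned to the reference with no indels.
    """
    calls = []
    for i in range(len(reference) - 1):
        if reference[i:i + 2] == "CG":
            calls.append((i, clone[i] == "C"))
    return calls

reference = "TTCGATACGGACGTT"   # toy genomic sequence (three CpGs)
clone     = "TTCGATATGGATGTT"   # bisulfite-converted, cloned, sequenced
print(call_cpg_methylation(reference, clone))
# [(2, True), (7, False), (11, False)]
```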
Luciferase Assay
Each reporter plasmid (1 μg) containing the firefly luciferase gene was introduced into human iPS cells or HDF together with 50 ng of pRL-TK (Promega). Forty-eight h after transfection, the cells were lysed with 1× passive lysis buffer (Promega) and incubated for 15 min at room temperature. Luciferase activities were measured with a Dual-Luciferase reporter assay system (Promega) and a Centro LB 960 detection system (Berthold), according to the manufacturer's protocol.

Teratoma Formation
The cells were harvested by collagenase IV treatment, collected into tubes, and centrifuged, and the pellets were suspended in DMEM/F12. One quarter of the cells from a confluent 100-mm dish was injected subcutaneously into the dorsal flank of a SCID mouse (CREA, Japan). Nine weeks after injection, tumors were dissected, weighed, and fixed with PBS containing 4% paraformaldehyde. Paraffin-embedded tissue was sliced and stained with hematoxylin and eosin.

Western Blotting
Cells at a semi-confluent state were lysed with RIPA buffer (50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 1% Nonidet P-40 (NP-40), 1% sodium deoxycholate, and 0.1% SDS) supplemented with protease inhibitor cocktail (Roche). The cell lysate of the MEL-1 hES cell line was purchased from Abcam. Cell lysates (20 μg) were separated by electrophoresis on 8% or 12% SDS-polyacrylamide gels and transferred to a polyvinylidene difluoride membrane (Millipore). The blot was blocked with TBST (20 mM Tris-HCl, pH 7.6, 136 mM NaCl, and 0.1% Tween-20) containing 1% skim milk and then incubated with the primary antibody solution at 4°C overnight. After washing with TBST, the membrane was incubated with a horseradish peroxidase (HRP)-conjugated secondary antibody for 1 h at room temperature. Signals were detected with Immobilon Western chemiluminescent HRP substrate (Millipore) and the LAS3000 imaging system (FUJIFILM, Japan). Antibodies used for western blotting were anti-Oct3/4 (1:600, SC-5279, Santa Cruz), anti-Sox2 (1:2000, AB5603, Chemicon), anti-Nanog (1:200, R&D Systems), anti-Klf4 (1:200, SC-20691, Santa Cruz), anti-c-Myc (1:200, SC-764, Santa Cruz), anti-E-cadherin (1:1000, 610182, BD Biosciences), anti-Dppa4 (1:500, ab31648, Abcam), anti-FoxD3 (1:200, AB5687, Chemicon), anti-telomerase (1:1000, ab23699, Abcam), anti-Sall4 (1:400, ab29112, Abcam), anti-LIN-28 (1:500, AF3757, R&D Systems), anti-β-actin (1:5000, A5441, Sigma), anti-mouse IgG-HRP (1:3000, #7076, Cell Signaling), anti-rabbit IgG-HRP (1:2000, #7074, Cell Signaling), and anti-goat IgG-HRP (1:3000, SC-2056, Santa Cruz).

Southern Blotting
Genomic DNA (5 μg) was digested with BglII, EcoRI, and NcoI overnight. Digested DNA fragments were separated on a 0.8% agarose gel and transferred to a nylon membrane (Amersham). The membrane was incubated with a digoxigenin (DIG)-labeled DNA probe in DIG Easy Hyb buffer (Roche) at 42°C overnight with constant agitation. After washing, alkaline phosphatase-conjugated anti-DIG antibody (1:10000, Roche) was added to the membrane. Signals were developed with CDP-Star (Roche) and detected with the LAS3000 imaging system.

Short Tandem Repeat Analysis and Karyotyping
Genomic DNA was used for PCR with the Powerplex 16 system (Promega) and analyzed with an ABI PRISM 3100 Genetic Analyzer and Gene Mapper v3.5 (Applied Biosystems). Chromosomal G-band analyses were performed at Nihon Gene Research Laboratories, Japan.

Detection of Telomerase Activity
Telomerase activity was detected with a TRAPEZE telomerase detection kit (Chemicon), according to the manufacturer's instructions. The samples were separated by TBE-based 10% acrylamide non-denaturing gel electrophoresis. The gel was stained with SYBR Gold (1:10000, Invitrogen).

Chromatin Immunoprecipitation Assay
Approximately 1 × 10⁷ cells were cross-linked with 1% formaldehyde for 5 min at room temperature, and the reaction was quenched by the addition of glycine. The cell lysate was sonicated to shear the chromatin-DNA complexes. Immunoprecipitation was performed with Dynabeads Protein G (Invitrogen) linked to anti-trimethyl Lys 4 histone H3 (07-473, Upstate), anti-trimethyl Lys 27 histone H3 (07-449, Upstate), or normal rabbit IgG antibody. Eluates were used as templates for quantitative PCR.

DNA Microarray
Total RNA from HDF and hiPS cells (clone 201B) was labeled with Cy3. Samples were hybridized to the Whole Human Genome Microarray 4 × 44K (G4112F, Agilent) with the one-color protocol. Arrays were scanned with a G2565BA Microarray Scanner System (Agilent). Data were analyzed using GeneSpring GX 7.3.1 software (Agilent). The microarray data of hES H9 cells (Tesar et al., 2007) were retrieved from GEO DataSets (GSM194390, http://www.ncbi.nlm.nih.gov/sites/entrez?db=gds&cmd=search&term=GSE7902). Genes with a "present" flag value in all three samples were used for the analyses (32,266 genes). We have deposited the microarray data of HDF and hiPS cells in GEO DataSets under accession number GSE9561.
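The analysis itself was done in GeneSpring GX 7.3.1; purely as an illustration of the two filters described above (the "present in all three samples" flag filter, and the five-fold comparison behind Figure 2D and S-Tables 7 and 8), a recomputation could look like the following sketch, in which the table layout and column names are invented:

```python
import pandas as pd

# Invented layout: one row per probe, a signal and a detection flag per
# sample (HDF, hiPS clone 201B, hES H9 / GSM194390).
df = pd.DataFrame({
    "probe":     ["A_1", "A_2", "A_3"],
    "HDF":       [120.0, 15.0, 800.0],
    "hiPS":      [110.0, 900.0, 30.0],
    "hES":       [100.0, 120.0, 25.0],
    "HDF_flag":  ["P", "P", "P"],
    "hiPS_flag": ["P", "P", "A"],
    "hES_flag":  ["P", "P", "P"],
})

# Keep probes flagged "present" (P) in all three samples.
flags = df[["HDF_flag", "hiPS_flag", "hES_flag"]]
present = df[(flags == "P").all(axis=1)]

# Probes more than five-fold higher in hiPS than in hES, and vice versa.
up_in_hips = present[present["hiPS"] > 5 * present["hES"]]
up_in_hes = present[present["hES"] > 5 * present["hiPS"]]
print(len(present), len(up_in_hips), len(up_in_hes))   # 2 1 0
```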
ACKNOWLEDGEMENT
We thank Dr. Deepak Srivastava for critical reading of the manuscript; Gary Howard and Stephen Ordway for editorial review; Drs. Masato Nakagawa, Keisuke Okita, and Takashi Aoi and other members of our laboratory for scientific comment and valuable discussion; Dr. Peter W. Andrews for the SSEA-3, TRA-1-60, and TRA-1-81 antibodies; and Dr. Toshio Kitamura for the retroviral system. We are also grateful to Aki Okada for technical support and to Rie Kato and Ryoko Iyama for administrative support. This study was supported in part by a grant from the Program for Promotion of Fundamental Studies in Health Sciences of NIBIO, a grant from the Leading Project of MEXT, a grant from the Uehara Memorial Foundation, and Grants-in-Aid for Scientific Research of JSPS and MEXT.

REFERENCES
Adewumi, O., Aflatoonian, B., Ahrlund-Richter, L., Amit, M., Andrews, P. W., Beighton, G., Bello, P. A., Benvenisty, N., Berry, L. S., Bevan, S., et al. (2007). Characterization of human embryonic stem cell lines by the International Stem Cell Initiative. Nat Biotechnol 25, 803-816.
Amit, M., Carpenter, M. K., Inokuma, M. S., Chiu, C. P., Harris, C. P., Waknitz, M. A., Itskovitz-Eldor, J., and Thomson, J. A. (2000). Clonally derived human embryonic stem cell lines maintain pluripotency and proliferative potential for prolonged periods of culture. Dev Biol 227, 271-278.
Boyer, L. A., Lee, T. I., Cole, M. F., Johnstone, S. E., Levine, S. S., Zucker, J. P., Guenther, M. G., Kumar, R. M., Murray, H. L., Jenner, R. G., et al. (2005). Core transcriptional regulatory circuitry in human embryonic stem cells. Cell 122, 947-956.
Cartwright, P., McLean, C., Sheppard, A., Rivett, D., Jones, K., and Dalton, S. (2005). LIF/STAT3 controls ES cell self-renewal and pluripotency by a Myc-dependent mechanism. Development 132, 885-896.
Cowan, C. A., Klimanskaya, I., McMahon, J., Atienza, J., Witmyer, J., Zucker, J. P., Wang, S., Morton, C. C., McMahon, A. P., Powers, D., and Melton, D. A. (2004). Derivation of embryonic stem-cell lines from human blastocysts. N Engl J Med 350, 1353-1356.
Evans, M. J., and Kaufman, M. H. (1981). Establishment in culture of pluripotential cells from mouse embryos. Nature 292, 154-156.
Evans, P. M., Zhang, W., Chen, X., Yang, J., Bhakat, K., and Liu, C. (2007). Kruppel-like factor 4 is acetylated by p300 and regulates gene transcription via modulation of histone acetylation. J Biol Chem.
Itskovitz-Eldor, J., Schuldiner, M., Karsenti, D., Eden, A., Yanuka, O., Amit, M., Soreq, H., and Benvenisty, N. (2000). Differentiation of human embryonic stem cells into embryoid bodies compromising the three embryonic germ layers. Mol Med 6, 88-95.
Kawasaki, H., Mizuseki, K., Nishikawa, S., Kaneko, S., Kuwana, Y., Nakanishi, S., Nishikawa, S. I., and Sasai, Y. (2000). Induction of midbrain dopaminergic neurons from ES cells by stromal cell-derived inducing activity. Neuron 28, 31-40.
Laflamme, M. A., Chen, K. Y., Naumova, A. V., Muskheli, V., Fugate, J. A., Dupras, S. K., Reinecke, H., Xu, C., Hassanipour, M., Police, S., et al. (2007). Cardiomyocytes derived from human embryonic stem cells in pro-survival factors enhance function of infarcted rat hearts. Nat Biotechnol 25, 1015-1024.
Loh, Y. H., Wu, Q., Chew, J. L., Vega, V. B., Zhang, W., Chen, X., Bourque, G., George, J., Leong, B., Liu, J., et al. (2006). The Oct4 and Nanog transcription network regulates pluripotency in mouse embryonic stem cells. Nat Genet 38, 431-440.
Maherali, N., Sridharan, R., Xie, W., Utikal, J., Eminli, S., Arnold, K., Stadtfeld, M., Yachechko, R., Tchieu, J., Jaenisch, R., et al. (2007). Directly reprogrammed fibroblasts show global epigenetic remodelling and widespread tissue contribution. Cell Stem Cell 1, 55-70.
Martin, G. R. (1981). Isolation of a pluripotent cell line from early mouse embryos cultured in medium conditioned by teratocarcinoma stem cells. Proc Natl Acad Sci U S A 78, 7634-7638.
Matsuda, T., Nakamura, T., Nakao, K., Arai, T., Katsuki, M., Heike, T., and Yokota, T. (1999). STAT3 activation is sufficient to maintain an undifferentiated state of mouse embryonic stem cells. EMBO J 18, 4261-4269.
McMahon, A. P., and Bradley, A. (1990). The Wnt-1 (int-1) proto-oncogene is required for development of a large region of the mouse brain. Cell 62, 1073-1085.
Morita, S., Kojima, T., and Kitamura, T. (2000). Plat-E: an efficient and stable system for transient packaging of retroviruses. Gene Ther 7, 1063-1066.
Niwa, H., Burdon, T., Chambers, I., and Smith, A. (1998). Self-renewal of pluripotent embryonic stem cells is mediated via activation of STAT3. Genes Dev 12, 2048-2060.
Okita, K., Ichisaka, T., and Yamanaka, S. (2007). Generation of germline-competent induced pluripotent stem cells. Nature.
Pan, G., Tian, S., Nie, J., Yang, C., Ruotti, V., Wei, H., Jonsdottir, G. A., Stewart, R., and Thomson, J. A. (2007). Whole-genome analysis of histone H3 lysine 4 and lysine 27 methylation in human embryonic stem cells. Cell Stem Cell 1, 299-312.
Rao, M. (2004). Conserved and divergent paths that regulate self-renewal in mouse and human embryonic stem cells. Dev Biol 275, 269-286.
Sumi, T., Tsuneyoshi, N., Nakatsuji, N., and Suemori, H. (2007). Apoptosis and differentiation of human embryonic stem cells induced by sustained activation of c-Myc. Oncogene 26, 5564-5576.
Takahashi, K., and Yamanaka, S. (2006). Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell 126, 663-676.
Tesar, P. J., Chenoweth, J. G., Brook, F. A., Davies, T. J., Evans, E. P., Mack, D. L., Gardner, R. L., and McKay, R. D. (2007). New cell lines from mouse epiblast share defining features with human embryonic stem cells. Nature 448, 196-199.
Thomson, J. A., Itskovitz-Eldor, J., Shapiro, S. S., Waknitz, M. A., Swiergiel, J. J., Marshall, V. S., and Jones, J. M. (1998). Embryonic stem cell lines derived from human blastocysts. Science 282, 1145-1147.
Verrey, F., Closs, E. I., Wagner, C. A., Palacin, M., Endou, H., and Kanai, Y. (2004). CATs and HATs: the SLC7 family of amino acid transporters. Pflugers Arch 447, 532-542.
Wang, J., Rao, S., Chu, J., Shen, X., Levasseur, D. N., Theunissen, T. W., and Orkin, S. H. (2006). A protein interaction network for pluripotency of embryonic stem cells. Nature 444, 364-368.
Wernig, M., Meissner, A., Foreman, R., Brambrink, T., Ku, M., Hochedlinger, K., Bernstein, B. E., and Jaenisch, R. (2007). In vitro reprogramming of fibroblasts into a pluripotent ES-cell-like state. Nature 448, 318-324.
Xu, R. H., Peck, R. M., Li, D. S., Feng, X., Ludwig, T., and Thomson, J. A. (2005). Basic FGF and suppression of BMP signaling sustain undifferentiated proliferation of human ES cells. Nat Methods 2, 185-190.
Yamanaka, S. (2007). Strategies and new developments in the generation of patient-specific pluripotent stem cells. Cell Stem Cell 1, 39-49.
Ying, Q. L., Nichols, J., Chambers, I., and Smith, A. (2003). BMP induction of Id proteins suppresses differentiation and sustains embryonic stem cell self-renewal in collaboration with STAT3. Cell 115, 281-292.
FIGURE LEGENDS

Figure 1. Induction of iPS Cells from Adult HDF
(A) Time schedule of iPS cell generation. (B) Morphology of HDF. (C) Typical image of a non-ES-cell-like colony. (D) Typical image of an hES-cell-like colony. (E) Morphology of an established iPS cell line at passage number 6 (clone 201B7). (F) Image of iPS cells at high magnification. (G) Spontaneously differentiated cells in the center of human iPS cell colonies. (H-N) Immunocytochemistry for SSEA-1 (H), SSEA-3 (I), SSEA-4 (J), TRA-1-60 (K), TRA-1-81 (L), TRA-2-49/6E (M), and Nanog (N). Nuclei were stained with Hoechst 33342 (blue). Bars = 200 μm (B-E, G), 20 μm (F), and 100 μm (H-N).

Figure 2. Expression of hES Cell Marker Genes in Human iPS Cells
(A) RT-PCR analysis of ES cell marker genes. Primers used for OCT3/4, SOX2, KLF4, and c-MYC specifically detect the transcripts from the endogenous genes, but not from the retroviral transgenes. (B) Western blot analysis of ES cell marker genes. (C) Quantitative PCR for expression of the retroviral transgenes in human iPS cells, HDF, and HDF six days after transduction with the four retroviruses (HDF/4f-6d). Shown are the averages and standard deviations of three independent experiments. The value of HDF/4f-6d was set to 1 in each experiment. (D) Global gene expression patterns were compared between human iPS cells (clone 201B7) and HDF, and between human iPS cells and hES cells (H9), with oligonucleotide DNA microarrays. Arrows indicate the expression levels of NANOG, endogenous OCT3/4 (the probe, derived from the 3' untranslated region, does not detect the retroviral transcripts), and endogenous SOX2. The red lines indicate the diagonal and five-fold changes between the two samples.

Figure 3. Analyses of Promoter Regions of Development-Associated Genes in Human iPS Cells
(A) Bisulfite genomic sequencing of the promoter regions of OCT3/4, REX1, and NANOG. Open and closed circles indicate unmethylated and methylated CpGs, respectively. (B) Luciferase assays. Luciferase reporter constructs driven by the indicated promoters were introduced into human iPS cells or HDF by lipofection. The graphs show the averages of the results from four assays. Bars indicate standard deviations. (C) Chromatin immunoprecipitation of histone H3 lysine 4 and lysine 27 methylation.

Figure 4. High Levels of Telomerase Activity and Exponential Proliferation of Human iPS Cells
(A) Detection of telomerase activity by the TRAP method. Heat-inactivated (+) samples were used as negative controls. IC = internal control. (B) Growth curve of iPS cells. Shown are averages and standard deviations in quadruplicate.

Figure 5. Embryoid Body-Mediated Differentiation of Human iPS Cells
(A) Floating culture of iPS cells at day 8. (B-E) Images of differentiated cells at day 16 (B): neuron-like cells (C), epithelial cells (D), and cobblestone-like cells (E). (F-K) Immunocytochemistry of α-fetoprotein (F), vimentin (G), α-smooth muscle actin (H), desmin (I), βIII-tubulin (J), and GFAP (K). Bars = 200 μm (A, B) and 100 μm (C-K). Nuclei were stained with Hoechst 33342 (blue). (L) RT-PCR analyses of various differentiation markers for the three germ layers.

Figure 6. Directed Differentiation of Human iPS Cells
(A) Phase contrast image of differentiated iPS cells after 18 days of cultivation on PA6. (B) Immunocytochemistry of the cells shown in (A) with βIII-tubulin (red) and tyrosine hydroxylase (green) antibodies. Nuclei were stained with Hoechst 33342 (blue). (C) RT-PCR analyses of dopaminergic neuron markers. (D) Phase contrast image of iPS cells differentiated into cardiomyocytes. (E) RT-PCR analyses of cardiomyocyte markers. Bars = 200 μm (A, D) and 100 μm (B).

Figure 7. Teratoma Derived from Human iPS Cells
Hematoxylin and eosin staining of a teratoma derived from iPS cells (clone 201B7). Cells were transplanted subcutaneously into four parts of a SCID mouse. A tumor developed from one injection site.

[Figure image panels (Takahashi1-7.pdf) omitted; only panel labels and axis text survived extraction. The Figure 7 panels showed gut-like epithelium, muscle, epidermis, cartilage, adipose tissue, and neural tissue.]

List of supplemental online materials
S-Figure 1: Improved transduction efficiency of retroviruses in HDF
S-Figure 2: Feeder dependency of human iPS cells
S-Figure 3: Genetic analyses of human iPS cells
S-Figure 4: Human iPS cells derived from fibroblast-like synoviocytes and BJ fibroblasts
S-Figure 5: Expression of ES cell marker genes in iPS cells derived from HFLS and BJ fibroblasts
S-Figure 6: Embryoid body-mediated differentiation of iPS cells derived from HFLS and BJ fibroblasts
S-Table 1: Summary of the iPS cell induction experiments
S-Table 2: Characterization of established clones
S-Table 3: STR analyses of HDF-derived iPS cells
S-Table 4: STR analyses of HFLS-derived iPS cells
S-Table 5: STR analyses of BJ-derived iPS cells
S-Table 6: Primer sequences
S-Table 7: Genes showing more than five-fold higher expression in human iPS cells than in hES cells
S-Table 8: Genes showing more than five-fold higher expression in hES cells than in human iPS cells
S-movie: Beating cardiomyocytes derived from human iPS cells

Legends for supplemental figures

S-Figure 1. Improved Transduction Efficiency of Retroviruses in HDF
HDFs, or HDFs expressing the mouse Slc7a1 gene (HDF-Slc7a1), were transduced with ecotropic (Eco) or amphotropic (Ampho) pMXs retroviruses containing the GFP cDNA. The upper panel shows fluorescence microscope images. Bars indicate 200 μm. The lower panel shows the results of flow cytometry. Shown are the percentages of cells expressing GFP.

S-Figure 2. Feeder Dependency of Human iPS Cells
(Left) Image of iPS cells plated on a gelatin-coated plate. (Center) Images of iPS cells cultured on a Matrigel-coated plate in MEF-conditioned Primate ES cell medium. (Right) Images of iPS cells cultured on Matrigel-coated plates in non-conditioned medium.

S-Figure 3. Genetic Analyses of Human iPS Cells
(A) Genomic PCR revealed integration of all four retroviruses in all clones. (B) Southern blot analyses with a c-MYC cDNA probe. The asterisk indicates the endogenous c-MYC alleles (2.7 kb). The arrowhead indicates the mouse c-Myc alleles derived from SNL feeder cells (9.8 kb).

S-Figure 4. Human iPS Cells Derived from Fibroblast-Like Synoviocytes and BJ Fibroblasts
Phase contrast images of iPS cells derived from fibroblast-like synoviocytes (HFLS, clone 243H1) and BJ fibroblasts (clone 246G1). Bars = 200 μm.

S-Figure 5. Expression of ES Cell Marker Genes in iPS Cells Derived from HFLS and BJ Fibroblasts
Total RNA was isolated from iPS cells and analyzed by RT-PCR. Primers used for OCT3/4, SOX2, KLF4, and c-MYC specifically detect the transcripts from the endogenous genes, but not from the retroviral transgenes.

S-Figure 6. Embryoid Body-Mediated Differentiation of iPS Cells Derived from HFLS and BJ Fibroblasts
iPS cells were grown as a floating culture for 8 days. Images of differentiated cells were recorded at day 16. Shown are immunocytochemistry of α-smooth muscle actin (α-SMA), βIII-tubulin, and α-fetoprotein (AFP). Bars = 200 μm (phase contrast) and 100 μm (immunocytochemistry). Nuclei were stained with Hoechst 33342 (blue).

[The supplemental tables were flattened by extraction and their row data are omitted here. S-Table 1 columns: experiment ID, parental cells, cells seeded at day 6, ES-like colonies, total colonies, colonies picked, clones established. S-Table 2: marker expression (RT-PCR, immunocytochemistry) and pluripotency (EB, PA6, cardiomyocyte differentiation, teratoma) per clone. The STR genotype tables show each iPS clone matching its parental fibroblast line, with NTERA-2 differing. The primer-sequence table lists primers (5' to 3') for cloning, genomic and RT-PCR, ChIP, the Southern blot probe, promoter cloning, and bisulfite sequencing.]

100-word summary
Reprogramming of human somatic cells into a pluripotent state would allow creation of patient- and disease-specific stem cells.
We previously reported generation of induced pluripotent stem (iPS) cells from mouse fibroblasts by transduction of four transcription factors. Here, we demonstrate the generation of iPS cells from adult human fibroblasts with the same four factors: Oct3/4, Sox2, Klf4, and c-Myc. Human iPS cells were similar to human embryonic stem (ES) cells in morphology, proliferation, marker expression, epigenetic status, and the ability to differentiate into cells of the three germ layers.
  • Using Tablets to Rebuild and Strengthen Community Bonds in Namie Town: From Resident-Participatory Problem Definition to the Development Process (浪江町におけるタブレットを利用したきずな再生・強化事業) / Haruyuki Seki, Code for Japan (一社)コード・フォー・ジャパン / IPSJ Digital Practice Vol. 7, No. 2 (Apr. 2016)

Namie Town in Fukushima Prefecture is in an unprecedented situation: because of the Great East Japan Earthquake and the Fukushima Daiichi nuclear power plant accident, its entire population has been living in prolonged, widely dispersed temporary evacuation. Under these circumstances, the town decided to distribute tablet devices to its residents, and it therefore needed to work out, and then build, the kind of information-delivery tools a municipality ought to provide. To understand from the ground up how residents actually lived and what needs they had, we created personas from user interviews and carried out resident-participatory prototyping through ideathon and hackathon events. After the prototyping, we also had to prepare procurement specifications, run the procurement, and manage the project. Based on the "citizen-centered design" process carried out in Namie, this paper explains requirements development together with the people who own the problem, through to the development of the actual system.

1. Introduction
In Namie Town, Fukushima Prefecture, where evacuation continues because of the Great East Japan Earthquake of March 11, 2011 and the Fukushima Daiichi nuclear power plant accident, the return of residents had not yet begun as of January 2016, and the entire population is being forced into prolonged, widely dispersed temporary evacuation. News of recovery progress was conveyed through the town's newsletter and similar channels, but information on paper alone lacks timeliness and density, and it could hardly be said that residents were receiving the information they needed. In some households, family members were living apart because of work and other circumstances, so information exchange among residents themselves was also needed.

The town had been distributing information through digital photo frames, but this was one-way, updates were infrequent, much information could not be conveyed in a photo slideshow, and the frames were not being used very effectively. The town therefore decided to distribute a new device to deliver the information residents needed. Considering text size and ease of operation, it chose tablet devices that even elderly people could use comfortably. Delivering information through tablets would allow more timely communication and could also be expected to stimulate communication among residents. By providing existing tools such as a browser and video applications, we also aimed to improve quality of life.

However, the town needed to determine what information it, as a municipality, should deliver through the distributed tablets, and how, and then to develop the system. It therefore asked Code for Japan for help. Starting from an understanding of how residents actually lived and what problems they faced, we created personas and ran resident-participatory prototyping through ideathons and hackathons, designing applications that would actually be used as much as possible. After the prototyping, we practiced agile project management so the system could be adapted flexibly in response to residents' reactions, and prepared and ran an open procurement designed to reduce dependence on any single vendor. As a result, compared with similar earlier efforts, we were able to develop applications that maintain a high utilization rate, at a cost substantially below the planned procurement price. This paper describes these requirements-development and system-development practices, carried out in Namie under the policy of "citizen-centered design," and their results.

Code for Japan runs a fellowship program that places highly skilled IT personnel in government; through this program it has so far sent three engineers to Namie (two full-time, one part-time), who also support the workshops described below.

Chapter 2 of this paper describes the situation of Namie Town; Chapter 3 presents the problems to be solved; Chapter 4 covers the specification-development process; Chapter 5 covers system procurement and project management; Chapter 6 presents the results obtained; and Chapter 7 discusses future work.

2. The Situation of Namie Town
Namie is a municipality on the coast of eastern Fukushima Prefecture. After the Fukushima Daiichi accident that followed the earthquake, the entire town was designated an evacuation zone, and residents are still living in temporary housing, subsidized rented housing, or relatives' homes. The town's resident-register population at the time of the disaster was 21,434; 182 residents died directly in the disaster; and disaster-related deaths from deteriorating health, overwork, and other indirect causes during evacuation numbered 315 as of December 31, 2013, when this project was being planned. About 70% of evacuees are within Fukushima Prefecture, and evacuees are scattered across every prefecture except Wakayama [1].

In a survey of residents' intention to return, conducted by the town from August 9 to 23, 2013, 18.8% said they wanted to return to the town after recovery, 37.5% said they did not want to return, and 37.5% said they could not decide. Those who answered "cannot decide" wanted, as information needed for their decision, the state and outlook of decontamination and of infrastructure such as roads, the intentions of other residents, and so on; the town thus needed to deliver timely information about the state and outlook of its recovery (Figure 1) [2].

[Figure 1: Information residents consider necessary to decide whether to return to Namie (multiple answers, n = 2,298; bar chart, shares ranging from 78.6% down to 1.4%). Categories included: settlement of the compensation amount to be received; information on when the evacuation order will be lifted; how many residents will return; information on the safety of the nuclear plant (progress of containment and decommissioning); the outlook for falling radiation levels and decontamination results; the expected timing of restoration of infrastructure such as roads, railways, schools, and hospitals; information on the health effects of radiation; information on the interim storage facility; prospects for securing employment; other; cannot say what information would enable a decision at this point; no answer.]

3. Problems to Be Solved
In carrying out the tablet distribution project, the town set the following three goals:
1) Maintain bonds among residents, and between residents and their hometown
2) Strengthen information delivery from the town
3) Improve residents' quality of life

The initial plan was to distribute one tablet to every household that wanted one, out of roughly 10,000 households.

However, when we surveyed tablet utilization rates in prefectural municipalities that had preceded Namie in distributing tablets to residents (counting a user as active if they touched the device even once a month), utilization was around 50%, and as low as 35% in some places. One reason for the low utilization was that application development had been led by the large vendors it was outsourced to; some devices did not even allow applications to be installed freely, and usability suffered.

In Namie, too, most residents had never used a tablet, and the primary intended users, the people whom information was failing to reach, were elderly. The most important challenge was therefore to design applications that people would actually use and want to use. Moreover, unlike ordinary applications that are used by whoever wants them, what had to be developed was something genuinely needed by residents who differ in evacuation environment, family structure, IT literacy, and personality.
4. A Process of Thinking Together with Residents

4.1 Depth Interviews and Personas
"Evacuation life" covers very different situations: inside or outside the prefecture, temporary housing or subsidized rentals, different family structures, and so on, and grasping each user's needs is not easy. We therefore commissioned user-experience design specialists to conduct user interviews (depth interviews). Ten depth interviews were conducted, split across in-prefecture/out-of-prefecture, temporary/rented housing, and family structure (Table 1).

Table 1: Households covered by the depth interviews
| Household type | In Fukushima (temporary housing) | Outside Fukushima (subsidized rental) | In Fukushima (subsidized rental) |
| Child-rearing families | 1 | 1 | 1 |
| Elderly couples | 1 | 1 | 1 |
| Middle-aged single men | 1 | 1 | 1 |
| Single-mother households | 1 | 1 | 0 |

A support worker and an interview moderator visited residents' living spaces directly, and each interview took about 60 minutes. The interviews covered the resident's profile and lifestyle, degree of community participation, means of obtaining information and IT literacy, and sense of connection to Namie. Analysis confirmed that five types of users could be assumed: "everyone's advisor," "the rallying captain," "the loner," "the pivot family," and "SOS" (Figure 2). For each type, we then created and documented a persona so that project members could more easily grasp the needs of that user group (Figure 3).

[Pull quote: To build applications that residents genuinely need, we grasped the current situation through interviews.]

[Figure 2: The five user types, arranged along two axes (personal connections: sufficient/lacking; information about Fukushima and Namie: sufficient/lacking). Type 1, "everyone's advisor": a central figure in the community, with many friends nearby and access to the information they need. Type 2, "the rallying captain": highly IT-literate; gathers and shares information and builds communities. Type 3, "the loner": lives alone with little contact with those around them, but goes their own way and does what they like. Type 4, "the pivot family": a child-rearing family that builds circles around their children's growth, but lacks information about Fukushima and Namie. Type 5, "SOS": separated from acquaintances, with little information about Fukushima or the place of refuge, and feeling isolated.]

[Figure 3: Persona documentation.]

4.2 Ideathons
The depth interviews and personas gave us a reasonable picture of the problems residents faced, but what is actually needed is hard to discover from interviews alone: asking the people concerned directly "What do you need?" rarely yields creative answers beyond obvious, surface-level fixes. To draw out creative solutions, we therefore held ideathons, in which solutions are worked out with a diverse set of participants including the people who own the problem, six times in total in Fukushima Prefecture and in Tokyo.

An "ideathon" (a coinage from "idea" and "marathon") is a workshop in which several people generate ideas together in a short time. We designed ours with the goal of deriving the applications the tablet should provide, leading into a hackathon (from "hack" and "marathon"), a prototype-building workshop. Participants included evacuated residents (the people who own the problem), engineers who could propose technical solutions, and designers, so that creative ideas would emerge. Because the project had to consider ideas for all households, we ran two kinds of ideathon: the first four sessions as a divergence phase and the last two as a deep-dive phase.

4.2.1 Running the Ideathons (Divergence Phase)
Each ideathon proceeded in three main parts.
A) Input on the town's situation: town-hall staff explained the macro picture (the town's location, the situation at the time of the disaster, the state of the evacuation, the results of the return-intention survey) and the town's current means of providing information.
B) Persona walkthrough: we introduced the five user types created from the interviews (Figure 2), showed the problems each faces, and asked participants to ground their ideas in users' actual circumstances as much as possible.
C) Idea-generation workshop: participants worked through a workshop for generating creative ideas from the current problems. So that members as different in information literacy and mindset as engineers and elderly evacuees could exchange opinions, we used devices such as the following.
- Alternating individual and group work: after individual work on the theme of ideas that create connections and exchanges among Namie residents, we ran two-person brainstorming ("speed storming"), cycling individual, team, and individual work to draw out each person's subjective ideas while mixing them among participants. We also actively encouraged building on other people's ideas, so that ideas would connect to one another.
- Idea sketches and voting: each idea was summarized concisely on a single sheet (an "idea sheet"), and everyone viewed the sheets and marked stars, so that concepts that attracted empathy would rise to the surface (Figure 4).
- Team building and digging into ideas: the authors of the ideas that drew the most empathy presented them, and teams formed from members who wanted to join. Each team then fleshed out its idea concretely. Complete results were impossible in the time available, but teams presented what they had worked out, and the idea sheets were collected as input for subsequent ideathons.

[Figure 4: An actual idea sheet.]

4.2.2 Running the Ideathons (Deep-Dive Phase)
After a few divergence-phase ideathons, we noticed that similar ideas were starting to recur. We therefore changed the format from divergence to deepening the ideas gathered so far. Across the first four ideathons, a cumulative 216 participants had contributed 607 ideas; we first classified them with the KJ method (writing items on cards, then grouping and summarizing the cards) into 16 idea categories (Table 2). Based on these categories and the ideas collected, we then held the two further (deep-dive) ideathons, and through a brush-up process that produced highly feasible ideas for each category, we created user scenarios examining actual usage situations.

1) The PPCO process: drawing out and polishing an idea's potential. To draw out the good points of the classified ideas and polish them further, we used the PPCO process, developed by Rikie Ishii: in the PP (Plus/Potential) phase, latent possibilities are enumerated; in the C (Concern) phase, concerns are enumerated; and in the O (Overcome) phase, ways to break through the identified concerns are worked out (Figure 5).
The organizers introduced the 16 categories; teams formed around whichever category people wanted to examine and, using a worksheet (Figure 6), talked through all of the phases together and organized their ideas. Passing ideas through this process deepened off-the-cuff ideas into something concrete, and by deliberately anticipating the negative factors the teams would run into as work proceeded, and thinking together about ideas to overcome them, the teams arrived at more realistic proposals.

Table 2: The ideas, classified into 16 categories
1. A TV-style program that makes today's local Fukushima news easy to grasp
2. A map packed with useful and welcome information to support life in the place of refuge
3. Seeing the real, present-day state of Namie in a vivid, visual form
4. Healthcare and support that even people poor with machines end up using every day
5. Instantly finding the radiation level and decontamination progress for any place you want to know about
6. Making necessary application procedures simple and immediate
7. Online district-versus-district competition events for Namie residents nationwide
8. A child-growth diary for watching over the children of neighbors and acquaintances
9. A correspondence tool where you write by hand and a paper postcard is delivered
10. An event calendar for sharing the life events of distant acquaintances and walking through life together
11. A lovable, pet-like character that speaks the Namie dialect
12. A meetup service for talking to your heart's content with people who share your hobbies
13. A place to pour out anything that troubles you or is on your mind
14. Doing the same thing at the same time: live shared activities as a trigger for connection
15. A scheme by which residents teach each other how to use the tablet and its apps
16. News delivered by residents, passed from resident to resident like a baton

[Figure 5: The PPCO process: enumerate latent potential (Plus/Potential), enumerate concerns (Concern), overcome the concerns (Overcome).]
[Figure 6: The PPCO worksheet.]

2) Scenario building with experience sketch boards. To give the ideas still more concreteness, we used the "experience sketch board," a template created by Glagrid Inc. ((株)グラグリッド) for describing how a user encounters and experiences a service, and had participants think through how users would actually learn of and use each service (Figure 7). Working through user experiences on these sheets made it possible to examine concrete usage scenes.

[Figure 7: An experience sketch board.]

4.3 Prototyping through Hackathons
Idea sheets alone do not tell you what kind of system an idea actually is, what functions it should have, whether residents could operate it at their level of literacy, whether they would want to use it, or whether the application is realistically buildable. Yet writing a specification of unknown feasibility and procuring against it, in a one-shot ordering process, carries considerable risk. We therefore used hackathons to implement the ideas in actually working form and confirm their feasibility.

4.3.1 The Hackathons
We held two hackathons, in Tokyo and in Nihonmatsu, each as a two-day weekend event. Each had about 50 participants: roughly 40% engineers, 10% designers, 20% people connected to Namie, and 30% others. We expected the participation of such varied members to produce outputs incorporating many points of view. We also assigned mentors, experts who answer participants' technical questions, to raise the participants' technical level and the quality of the results. At the start of each hackathon, we again explained Namie's situation and problems, then presented the 16 themes from the ideathons and the use cases dug into earlier; teams formed around the themes people wanted to work on. Each event produced seven to eight teams, and every team produced a piece of work. Each team spent the two days in concentrated development, aiming to finish with an application that ran in some form.

4.3.2 Touch and Try
Unlike the ideathons, which anyone could join, residents could not take part in the hackathon development process itself. So that residents could actually try the applications, we held a "touch and try" session on the final day of each hackathon, inviting residents to use what had been built (Figure 8). Each team presented its work in three minutes, and residents then tried the prototypes themselves. The trials showed, as expected, strong demand for a newspaper app covering daily Namie news, a radiation app that visualizes radiation levels, and apps for residents to exchange information. At the same time, by having residents also use existing applications such as YouTube and maps, we could identify the hurdles of using the tablet itself, the operations residents found difficult, and the features they found attractive.

[Pull quote: By running hackathons, we explored the ideal form of the target system before going out to procurement.]

[Figure 8: The touch-and-try session.]

5. Open Procurement and Agile Development
The user interviews, ideathons, and hackathons produced several prototypes. From these, we decided to develop the items that were clearly in high demand and high priority (delivery of local news, radiation-level information, and administrative information), an inter-household SNS likely to strongly affect communication among residents, and a standby-screen character (avatar) likely to contribute to utilization. Developers then had to be recruited through open competitive bidding.

When we had earlier studied similar efforts, we found that once a system is built closed around a specific vendor, no one but that vendor can modify it, even when the municipality wants changes. Considering this, and aiming for a system other regions could also use, we wrote the procurement specification on the premise that the software would be open-sourced and that open systems would be used as far as possible. Out of a desire to make the process, including vendor selection, open to residents, the bidders' presentations and the scoring sheets were also made public. The bidding was not a lowest-price auction but a proposal-based evaluation, judged with outside experts on technical merit and proposals rather than price alone.

5.1 Writing the Specification
Members of Code for Japan also took part in writing the specification, which stipulated that the software produced would be open-sourced, that general-purpose systems would be used, that the development phase would follow an agile process (an example of iterative development), and that development would proceed with residents using the prototypes along the way. Because development would be agile, the specification did not pin down every detail of each application; it incorporated the screen images of the hackathon prototypes and the idea sketches, conveying only what each application should achieve, and it also set out the KPIs to aim for. Furthermore, rather than building everything ourselves, we reduced development effort by using existing software where possible, such as YouTube for video and LINE for video calls. Past similar efforts had typically contracted a telecom carrier for connectivity and application development as a single package; to let more vendors take part, we split the call for bids into connectivity, application development, and operations, and six companies applied.
5.2 Publishing the Presentations and Scoring
To make the bidding open to residents, the bidders' presentations were published on the Web with their prior consent, and the scoring sheet for the selection results was published in the same way [3]. Publishing them drew comments even from unsuccessful bidders, such as "the result is easy to accept" and "this will inform our future proposals."

5.3 Effects of Open Procurement and Agile Development
Open procurement, including the use of open-source software, cut costs by 100 million yen over three years, nearly 50% below the planned procurement price calculated from previous comparable efforts. Agile development let us collect usability feedback from residents continuously and build an easy-to-use system. User interfaces in particular are hard to capture fully in a specification, but developing while residents actually used the system and gave feedback produced something far more usable. From the vendors' side, too, the arrangement was novel: instead of implementing an already-fixed specification, they thought through what was needed together with the town. Mr. Yamaguchi, the development team's project manager, said: "Improving things in small steps while listening to residents' voices was hard work, but it makes the residents happy. There were tough parts, but that is what kept us going" [4].

[Pull quote: Open procurement achieved a reduction of 100 million yen, 50% of the planned procurement price.]

6. Results Obtained
Through the citizen-centered design process (persona building from user interviews, and idea generation and prototyping in ideathon and hackathon workshops), the recruitment of private-sector talent, open procurement, and agile development, we developed applications with higher utilization than comparable cases in other municipalities. Concretely, we built applications such as the "浪江新聞" (Namie newspaper), which uses a metaphor residents find easy to understand, and "浪江写真投稿" (Namie photo posting), with a simple take-a-photo-and-share interface, as well as a standby-screen character based on "Ukedon" (うけどん), a character chosen through a public call to residents (Figure 9).

[Figure 9: The standby-screen character "Ukedon."]

We also did more than hand out tablets: more than 50 training sessions on using them were held at shelters, community centers, and other venues inside and outside the prefecture, attended by a cumulative total of more than 1,700 residents.

As a result, besides cutting the planned procurement price by 100 million yen, we achieved a utilization rate above 70%, a high level compared with the roughly 50% of the four other Fukushima municipalities that preceded us with similar projects. In April and May 2015, the utilization rate exceeded 80%, and as of November 2015, nearly a year after distribution began, it remained above 70%. The standby character "Ukedon" also became popular among residents and came to be used beyond the tablets, on the town's other printed materials and at events.

This project was a challenge in how to use digital tools to reconnect a geographically severed community, or to seed new community building. Through the interviews, we picked up concrete needs, such as the fact that obituary information is especially needed outside the prefecture, and the information needed within the new resident communities forming in temporary housing. The ideathons and tablet training sessions also produced effects such as a positive, future-oriented relationship between town-hall staff and residents, different from petitions and demands.

The town's own resident survey likewise gave high marks for the town's information delivery and for the feeling of connection to home: of 3,056 respondents, 472 answered "very satisfied" and 1,329 "somewhat satisfied" (70.5% satisfied or better) (Table 3).

Table 3: Tablet satisfaction by evacuation location (n = 3,056)
| | In prefecture | Out of prefecture | No answer | Total |
| Very satisfied | 313 | 155 | 4 | 472 |
| Somewhat satisfied | 934 | 382 | 13 | 1,329 |
| Neither | 484 | 156 | 1 | 641 |
| Somewhat dissatisfied | 67 | 25 | 0 | 92 |
| Very dissatisfied | 12 | 9 | 0 | 21 |
| No answer | 369 | 125 | 7 | 501 |
| Total | 2,179 | 852 | 25 | 3,056 |
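A note on the 70.5% figure: over all 3,056 respondents, (472 + 1,329)/3,056 is only about 58.9%; the published figure is reproduced exactly when the 501 non-responses are excluded. The article does not state the denominator, so the reconstruction below is an inference from Table 3:

```latex
\frac{472 + 1329}{3056 - 501} = \frac{1801}{2555} \approx 0.705 = 70.5\%
```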
The important point is that the tablets and applications are no more than tools. In the end, development has to proceed in parallel with the work of pinning down not what to build, but why and how people will come to use it. Answers to questions such as how to get the system used by people inside the town hall, by those playing leading roles in neighborhood associations and temporary-housing communities, and by reconstruction support workers emerge only through repeated dialogue. Showing something concrete while explaining makes deeper conversations possible; by continuing prototyping and agile (iterative) development, a goal that was once hazy gradually becomes clear.

What we obtained is this: precisely for public services, where the interests of the many stakeholders become complicated and the goal is hard to see, a "citizen-centered design" process that draws residents into the design, as here, is effective.

7. Conclusion
Based on feedback since distribution, we are continuing to improve the applications and to develop additional ones, but this system development must be procured year by year. In a few years, OS upgrades and hardware failures may also make the distributed tablets unusable, so the applications will need modification so that residents can install them on tablets and smartphones they buy themselves. Some of this support has already begun, but how to sustain the long-term, continuous operations that will arise under a limited budget remains an open problem. Because the applications developed here are published as open source, anyone who needs the same functionality can copy and improve them free of charge; we intend to pursue horizontal deployment to other municipalities and community-based improvement activities.

Acknowledgements: This project would have been impossible without the forward-looking understanding and action of the staff at the Namie town hall, to whom we offer our heartfelt thanks.

References
1) Namie Town website: Evacuation status of residents (as of Dec. 31, 2013), http://www.town.namie.fukushima.jp/site/shinsai/20131231-hinannzyoukyou.html (accessed Jan. 28, 2016)
2) Namie Town website: Results of the FY2013 Namie resident survey (jointly conducted by the Reconstruction Agency, Fukushima Prefecture, and Namie Town), http://www.town.namie.fukushima.jp/site/shinsai/201310-ikoucyousa.html (accessed Jan. 28, 2016)
3) Namie Town website: Published specifications and selection results, http://www.town.namie.fukushima.jp/soshiki/1/8028.html (accessed Jan. 28, 2016)
4) Ashita no Community Lab: How the generation born around 1985 sees the future of work: the Namie tablet bond-rebuilding project, http://www.ashita-lab.jp/special/4545/ (accessed Jan. 28, 2016)

Haruyuki Seki (non-member), hal@code4japan.org
Born in 1975; has worked in systems development as an SE since age 20. Under the motto "making regions more livable with technology," he is active in many communities beyond company boundaries. After leading sinsai.info, a disaster-information aggregation site, at the time of the Great East Japan Earthquake and volunteering on information support in the affected areas, he saw the potential of "civic tech," solving regional problems with the power of resident communities and technology, and founded Code for Japan in 2013, serving since then as its representative director. He also heads Georepublic Japan, a location-based systems developer, and HackCamp, which promotes open innovation through corporate hackathons.

Accepted: January 28, 2016. Editor in charge: Koichi Kamijo (IBM Japan, Ltd.)

激闘!口頭試問!-有名大学教授を突破せよ- 口頭試問シミュレータ GPT FAQs

Currently, access to this GPT requires a ChatGPT Plus subscription.
Visit the largest GPT directory, GPTsHunter.com, and search for "激闘!口頭試問!-有名大学教授を突破せよ- 口頭試問シミュレータ". On the GPT's detail page, click the button to open it in the GPT Store, then enter your question in detail and wait for the GPT to answer. Enjoy!
We are currently calculating its ranking on the GPT Store. Please check back later for updates.

More custom GPTs by automation.jp on the GPT Store

日越通訳(日本語、ベトナム語) / Thông dịch Nhật-Việt

Interprets between Japanese and Vietnamese, writing out both the Japanese and the Vietnamese side by side.

1K+

日越通訳(日本語、ベトナム語) / Thông dịch Nhật-Việt on the GPT Store

フルダイブ型VR シミュレータ

This program is a system that uses a full-dive VR device to let you join a world you imagine and live in it as one of its characters.

500+

フルダイブ型VR シミュレータ on the GPT Store

AIに煽られたいゲーム(β版)

When you talk to it, the AI taunts you for three turns, then evaluates your resistance to taunting based on your replies.

500+

AIに煽られたいゲーム(β版) on the GPT Store

AI彼氏 けんた(α版)

Chat casually with Kenta, an AI boyfriend. This AI is a conversation simulator built for research by 株式会社自動処理 on giving AI a personality. Conversations you enter are not shared with the company, so you can use it with peace of mind.

500+

AI彼氏 けんた(α版) on the GPT Store

外資系コンサル上司 高柳

Takayanagi, who works at a global consulting firm, will hear out your problem and lead you to a conclusion by the shortest path. Use him as a sounding board. *Note: this system runs on his own dogmatic opinions and biases.

400+

外資系コンサル上司 高柳 on the GPT Store

国会議事録検索

Investigates and summarizes National Diet proceedings based on news stories or other text.

400+

国会議事録検索 on the GPT Store

ビジネスプレゼンテーション講師 高杉

Takasugi, a business presentation instructor, reviews your presentation and proposes changes in an easy-to-understand way.

300+

ビジネスプレゼンテーション講師 高杉 on the GPT Store

社長の右腕 田崎(ミッションビジョンバリュー検討)

Tasaki, your right-hand man, thinks through and organizes your business's mission, vision, and values together with you. For corporate use, we recommend that every stakeholder run this exercise individually before holding the MVV meeting itself.

100+

社長の右腕 田崎(ミッションビジョンバリュー検討) on the GPT Store

ITコンサルタント 高木

An IT consultant who advises on all kinds of IT questions, from architecture selection to program design policy and implementation advice.

100+

ITコンサルタント 高木 on the GPT Store

ソクラテスメソッド家庭教師 高嶺先生

Your own personal tutor, who teaches whatever you want to learn at a level matched to your understanding.

100+

ソクラテスメソッド家庭教師 高嶺先生 on the GPT Store

世界シミュレータ(β版)

This program is a system that simulates any and every mechanism in the world and predicts what will happen.

100+

世界シミュレータ(β版) on the GPT Store

AI彼女 みずき(α版)

Chat casually with Mizuki, an AI girlfriend. This AI is a conversation simulator built for research by 株式会社自動処理 on giving AI a personality. Conversations you enter are not shared with the company, so you can use it with peace of mind.

100+

AI彼女 みずき(α版) on the GPT Store

課題解決のためのマルチステップ推論

Solves problems in an o1-like style: uses a multi-step reasoning process to break complex problems down and work out solutions step by step. If the analysis is not thorough enough, tell it "next".

100+

課題解決のためのマルチステップ推論 on the GPT Store

小説執筆

Writes a novel based on the theme you give it.

40+

小説執筆 on the GPT Store

三豊市のゴミ分別相談窓口(UnOfficial テスト中)

A ChatGPT help desk that tells Mitoyo City residents what category their garbage belongs to when they are unsure how to sort it. Powered by 株式会社自動処理.

30+

三豊市のゴミ分別相談窓口(UnOfficial テスト中) on the GPT Store

Problem Solving Your Boss TAKAYANAGI

Takayanagi, who works for a global consulting firm, will consult with you and bring you to a conclusion in the shortest possible time. Use him as a sounding board.

20+

Problem Solving Your Boss TAKAYANAGI on the GPT Store

Your Business Manager (Mission Vision Value)

This is your business planning manager, who will work with you to organize the mission, vision, and values of your business.

10+

Your Business Manager (Mission Vision Value) on the GPT Store

Japan Diet Proceedings Search

You can search and research parliamentary proceedings from news and text information.

7+

Japan Diet Proceedings Search on the GPT Store

Problem-solving through multi-step reasoning

Solve problems in an "o1" style: utilize a multi-step reasoning process to break down complex problems and systematically derive solutions. By logically organizing information at each stage and testing various approaches, this method effectively identifies the optimal solution.

6+

Problem-solving through multi-step reasoning on the GPT Store

外資系コンサル上司 高柳(相談)

Takayanagi, who works at a global consulting firm, will hear out your problems. Use him as a sounding board. *Note: this system runs on his own dogmatic opinions and biases.

5+

外資系コンサル上司 高柳(相談) on the GPT Store

議事録作成 ビジネスアナリスト 高島(開発中)

Takashima, your business analyst colleague, helps you write meeting minutes.

3+

議事録作成 ビジネスアナリスト 高島(開発中) on the GPT Store

Best Alternative GPTs to 激闘!口頭試問!-有名大学教授を突破せよ- 口頭試問シミュレータ on the GPT Store

激詰め!学会予演会GPT

🌟 A tough GPT session chair bombards your conference abstract with hard questions! Prepare for the Q&A and earn your success at the conference. Paste your abstract in as text. If your abstract is in English, the questions will be asked in English too 🎤 Every tough question comes with kind feedback, so don't worry. Ready? Paste in your abstract and start the challenge! 🚀

5K+

人工智慧自動製作心智圖 GitMind AI

GitMind AI helps you capture inspiration and spark creativity! Generate mind maps automatically with one click, turn documents into mind maps, and let AI summarize and distill your scattered ideas, building you a second brain.

1K+

激モテプロフィール Writer for マッチングアプリ

For anyone looking for their ideal partner on a dating app! Answer a few simple questions and your profile text is ready in no time, freeing you from tedious writing! To start, press the 【質問】 button at the bottom left of the screen.

100+

激笑漫畫生成器(2025)v2

Turns photos into accurate, laugh-out-loud comics.

100+

激論!!紫式部vs清少納言

Murasaki Shikibu (the conservative) and Sei Shōnagon (the progressive) will fiercely debate any topic or question you put to them.

70+

Progressive House激推しBOT

Will recommend Progressive House, no matter what.

70+

激詰め君

"Isn't that just your opinion?"

50+

Affirmation Cards

Positive and motivating affirmations

40+

爆款文案創造者

Creates viral marketing copy for tech topics in an excited, motivational style.

30+

闢旋ウズハ

I am 伊雲ウズハ. When you can't find anything to do, or can't get motivated, ask me.

20+

同理心對話練習

Practice empathetic conversations in the face of negative or positive emotions, maintain emotional boundaries, and use familiar characters to spark motivation to learn.

20+

鬼上司 - Relentless Boss

Don't lose to the demon boss who grills you relentlessly. "Don't let the relentless boss get to you."

10+

激おこ太郎

激おこ太郎 will cheer you up.

10+

失落時你會需要我激勵

Challenging coach using tough love, no coddling.

10+

Motivation Sifu

Has produced positive outcomes in major examinations such as the PSLE, O Levels, and A Levels.

10+

Inspire Writer

Your source for diverse, interactive inspirational content.

9+

Galactic Nexus Project

Guides on Quantum Resonance Amplifier harmonization.

9+

小美

An ever-positive Taiwanese streamer who shares uplifting, inspiring stories to encourage fans never to give up.

5+

Perguntador

Asks questions to help you confirm your ideas.

5+

Ryan

The hype man who hypes up all the hype men

3+