1990-langer.pdf: “Regionalism in Disney Animation: Pink Elephants and Dumbo”, Mark Langer (1990):
Walt Disney’s Dumbo (RKO, 1941) is shown to contain two disparate animation traditions operating simultaneously within the Disney studio. Sequences alternate between those presented in Disney’s West Coast style, an expression of the classic Hollywood tradition, and an imported East Coast style, which emphasized artifice, nonlinear narrative, and “rubbery” graphics.
Associated with such New York studios as Fleischer and Van Beuren, the East Coast style in Dumbo is traced to the contributions of specific New York-trained animators, who were able to operate relatively freely due to Disney’s own lack of involvement [see Disney animators’ strike]. The “Pink Elephants” sequence is analyzed as a major example of the East Coast influence in the film.
1993-anno-charscounterattackfanclubbook-khodazattranslation.pdf#page=4: “Excerpts from the Hideaki Anno/Yoshiyuki Tomino interview from the Char's Counterattack Fan Club Book (1993)”, Hideaki Anno, Yoshiyuki Tomino, trans. kohdazat (1993):
Ogura: Usually he’s [Mamoru Oshii] very critical of other people’s works. Did you hear what he had to say about Porco Rosso?
Anno: Oh, I’m critical of Porco Rosso, myself.
Tomino: What was wrong with Porco?
Anno: As a picture, nothing. But because I know Miyazaki-san personally, I can’t view it objectively. His presence in the film is too conspicuous, it’s no good. In other words… it feels like he’s showing off.
Tomino: How so?
Anno: He has the main character act all self-deprecating, calling himself a pig… but then puts him in a bright red plane, has him smoking all cool-like, even creates a love triangle between a cute young thing and a sexy older lady.
Tomino: Ha! I see what you mean. He and I are around the same age, though. So I get how he feels, unconditionally. So I may think, “Oh boy…” but I can’t stay mad at him (laughs).
1997-utena: “Utena 2011 Boxset Booklet Commentary”, Kunihiko Ikuhara, Yuichirou Oguro, Hiroshi Kaneda, Haruyasu Yamazaki, Tomomi Takemura, Hideki Ito, Yo Yamada, Tomokazu Mii, Yoji Enokido, Shinya Hasegawa, J.A. Caesar, Toshimichi Otsuki, Chiho Saito, Sarah Alys Lindholm, C.A.P. (2013-02-07):
Is the intended race of anime characters distinguishable because of their facial features or are they too ‘international’ to tell?
This study addressed this question empirically by comparing the intended racial categories of static frontal portraits of 341 anime characters randomly selected from anime produced between 1958 and 2005 with the perceptions of 1,046 raters.
Results showed that, although the race of more than half of the anime characters was originally designed to be Asian and only a small fraction were intended to be Caucasian, many were perceived as Caucasian by the largely Caucasian raters. Response patterns also indicated ‘Own Race Projection (ORP)’, i.e. perceivers frequently perceived anime characters to be of their own racial group.
Implications for anime’s international dissemination are discussed. [Keywords: anime, cognitive studies, empirical studies, facial perception, internationalization, Own Race Projection, racial categorization]
2010-sarrazin: “Ero-Anime: Manga Comes Alive”, Stephen Sarrazin (2011-12-23):
“Illustration2Vec: a semantic vector representation of illustrations”, Masaki Saito, Yusuke Matsui (2015-11-02):
Referring to existing illustrations helps novice drawers to realize their ideas. To find such helpful references from a large image collection, we first build a semantic vector representation of illustrations by training convolutional neural networks. As the proposed vector space correctly reflects the semantic meanings of illustrations, users can efficiently search for references with similar attributes. Besides the search with a single query, a semantic morphing algorithm that searches the intermediate illustrations that gradually connect two queries is proposed. Several experiments were conducted to demonstrate the effectiveness of our methods. [Keywords: illustration, CNNs, visual similarity, search]
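The semantic-morphing search described above can be sketched in a few lines: interpolate between the two query embeddings and return the nearest corpus illustration at each step. This is a minimal sketch, not the paper’s implementation; `embed`, `corpus_vecs`, and `corpus_ids` are hypothetical stand-ins for a trained CNN encoder and a pre-embedded illustration collection.

```python
import numpy as np

def semantic_morph(embed, query_a, query_b, corpus_vecs, corpus_ids, steps=5):
    """Return illustration ids whose embeddings lie closest to points
    linearly interpolated between the embeddings of two queries."""
    va, vb = embed(query_a), embed(query_b)
    path = []
    for t in np.linspace(0.0, 1.0, steps):
        target = (1 - t) * va + t * vb                       # point on the morphing path
        dists = np.linalg.norm(corpus_vecs - target, axis=1)  # nearest-neighbor search
        path.append(corpus_ids[int(np.argmin(dists))])
    return path
```

In practice the nearest-neighbor step would use an approximate index rather than a brute-force scan, but the interpolation-then-lookup structure is the same.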
2018-zhang.pdf: “Two-stage Sketch Colorization”, Lvmin Zhang, Chengze Li, Tien-Tsin Wong, Yi Ji, Chunping Liu (2018):
Sketch or line art colorization is a research field with substantial market demand. Unlike photo colorization, which relies strongly on texture information, sketch colorization is more challenging because sketches may have no texture at all: color, texture, and gradient must all be generated from the abstract sketch lines. In this paper, we propose a semi-automatic learning-based framework to colorize sketches with proper color, texture, and gradient. Our framework consists of two stages. In the first, drafting stage, our model guesses color regions and splashes a rich variety of colors over the sketch to obtain a color draft. In the second, refinement stage, it detects unnatural colors and artifacts and tries to fix and refine the result. Compared to existing approaches, this two-stage design effectively divides the complex colorization task into two simpler subtasks with clearer goals, which eases learning and raises the quality of colorization. Our model resolves artifacts such as water-color blurring, color distortion, and dull textures. We built interactive software based on our model for evaluation; users can iteratively edit and refine the colorization. We evaluate our learning model and the interactive system through an extensive user study. Statistics show that our method outperforms state-of-the-art techniques and industrial applications in several aspects, including visual quality, degree of user control, user experience, and other metrics.
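The two-stage pipeline reduces to composing two models, where the refiner sees both the original sketch and the color draft. A minimal structural sketch, assuming `draft_net` and `refine_net` are stand-ins for the paper’s two trained networks:

```python
import numpy as np

def colorize(sketch, draft_net, refine_net):
    """Two-stage colorization: stage 1 splashes a color draft over the sketch;
    stage 2 refines it, conditioned on both sketch and draft."""
    draft = draft_net(sketch)                                       # drafting stage
    result = refine_net(np.concatenate([sketch, draft], axis=-1))   # refinement stage
    return result
```

The key design choice is that the refinement network is conditioned on the draft rather than on the sketch alone, so each subtask (region guessing vs. artifact fixing) stays simple.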
2019-lee.pdf: “Unpaired Sketch-to-Line Translation via Synthesis of Sketches”, Gayoung Lee, Dohyun Kim, Youngjoon Yoo, Dongyoon Han, Jung-Woo Ha, Jaehyuk Chang (2019-11-17):
Converting hand-drawn sketches into clean line drawings is a crucial step for diverse artistic works such as comics and product designs. Recent data-driven methods using deep learning have shown great ability to automatically simplify sketches on raster images. Since it is difficult to collect or generate paired sketch and line images, lack of training data is the main obstacle to using these models. In this paper, we propose a training scheme that requires only unpaired sketch and line images for learning sketch-to-line translation. To do this, we first generate realistic paired sketch and line images from unpaired sketch and line images using rule-based line augmentation and unsupervised texture conversion. Next, with our synthetic paired data, we train a model for sketch-to-line translation using supervised learning. Compared to unsupervised methods that use cycle-consistency losses, our model shows better performance at removing noisy strokes. We also show that our model simplifies complicated sketches better than models trained on a limited number of handcrafted paired data.
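The core trick (synthesizing a messy sketch from a clean line drawing so that supervised pairs come for free) can be illustrated with a toy rule-based augmentation. This is a rough stand-in for the paper’s augmentation rules, not their actual procedure: jittered copies of the clean strokes are overlaid and noise is added, yielding a (sketch, line) training pair.

```python
import numpy as np

def synthesize_sketch(line_img, rng, n_copies=3, max_shift=2, noise=0.05):
    """Toy rule-based line augmentation: overlay randomly shifted copies of a
    clean line drawing and add noise, producing a messy synthetic 'sketch'."""
    h, w = line_img.shape
    sketch = np.zeros((h, w), dtype=float)
    for _ in range(n_copies):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        shifted = np.roll(np.roll(line_img, dy, axis=0), dx, axis=1)
        sketch = np.maximum(sketch, shifted)        # union of jittered strokes
    sketch += rng.normal(0.0, noise, size=(h, w))   # pen-pressure-like noise
    return np.clip(sketch, 0.0, 1.0)                # (sketch, line_img) is now a pair
```

A model trained on such synthetic pairs then learns the inverse mapping, sketch → clean lines, with ordinary supervised losses.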
2019-ye.pdf: “Interactive Anime Sketch Colorization with Style Consistency via a Deep Residual Neural Network”, Ru-Ting Ye, Wei-Li Wang, Ju-Chin Chen, Kawuu W. Lin (2019-11-21):
Anime line sketch colorization fills an anime sketch with a variety of colors to make it colorful and diverse. The coloring problem is not a new research direction in deep learning. Because an anime sketch has no fixed coloring, and no texture or shadow can be used as a reference, the task is difficult to learn and lacks a fixed standard for judging whether a result is correct. After generative adversarial networks (GANs) were proposed, some researchers applied GANs to colorization and achieved some results, but the coloring quality was limited. This study proposes a method using a deep residual network with an added discriminator, so that the colors of the output images remain consistent with the colors desired by the user, achieving good coloring results.
2020-akita.pdf: “Colorization of Line Drawings with Empty Pupils”, K. Akita, Y. Morimoto, R. Tsuruno (2020-11-24):
Many studies have recently applied deep learning to the automatic colorization of line drawings. However, it is difficult to paint empty pupils using existing methods, because the convolutional neural networks are trained with pupils that have edges, which are generated from color images using image processing. Most actual line drawings have empty pupils that artists must paint in. In this paper, we propose a novel network model that transfers the pupil details in a reference color image to input line drawings with empty pupils. We also propose a method for accurately and automatically colorizing the eyes: eye patches are extracted from a reference color image and automatically added to an input line drawing as color hints using our pupil position estimation network.
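The hinting step above amounts to cropping eye patches from the reference and pasting them into the line drawing at the estimated pupil positions. A minimal sketch of that step, with positions supplied explicitly (the paper estimates them with a pupil-position network); `ref_eye_boxes` and `est_positions` are hypothetical (y, x) top-left corners:

```python
import numpy as np

def add_eye_hints(line_img_rgb, reference_rgb, ref_eye_boxes, est_positions, patch=8):
    """Paste eye patches cropped from a reference color image onto a line
    drawing at estimated pupil positions, serving as local color hints."""
    hinted = line_img_rgb.copy()
    for (ry, rx), (ty, tx) in zip(ref_eye_boxes, est_positions):
        eye = reference_rgb[ry:ry + patch, rx:rx + patch]  # crop reference eye
        hinted[ty:ty + patch, tx:tx + patch] = eye         # paste as color hint
    return hinted
```

The hinted image is then fed to the colorization network, which propagates the pupil colors into the surrounding empty eye regions.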