Comments on naive geography, part 2
For Dan Montello's Cognitive Issues in GIScience class here at UCSB, I was asked to write two short essays, "for and against" the use of naive geography in geospatial software design. Normally, such essays would disappear into an archive on my hard drive, but since virtual globes are referenced a few times, I thought I'd post them.
Comments Against Naive Geography
by Alan Glennon
Egenhofer and Mark (1995) assert that incorporating naive or lay conceptions of geography into GIS and cartographic product design can make spatial information and decision support more accessible to a larger user community. While there is validity to the goal of creating understandable GIS and cartographic products, Egenhofer and Mark do not discuss the perils of their design philosophy. At least three prominent risks attend integrating naive geography elements into software and cartographic products: reductive bias, functionality dissonance, and lessened user control. These issues make the concept of naive geography inappropriate for GIS design.
Reductive bias
Reductive bias describes people’s tendency to construct overly simplistic understandings and categories (Feltovich et al. 1989). Egenhofer and Mark (1995) recognize that people’s lay knowledge “may be contrary to objective observations in the real, physical world.” Many of the naive geographic elements that contain inherent error result from reductive bias. For instance, Egenhofer and Mark note that people, in a “common simplification” of geographic space, generally disregard the curvature of the Earth. They also assert that people generally perceive the world as two-dimensional, without verticality, leading them to overestimate the steepness of slopes and the depths of canyons relative to their widths. It is perplexing, therefore, that Egenhofer and Mark assert that “there is a need to incorporate naive geographic knowledge and reasoning into GISs” when they acknowledge that such knowledge “may actually contain ‘errors’” and “occasionally be inconsistent.” While they undoubtedly intend for such misconceptions eventually to be remedied, facilitating and acquiescing to erroneous conceptual models would only confuse users and perpetuate further misconceptions. Once formed, conceptual models shaped by reductive bias are difficult to correct and overcome (Feltovich et al. 2001). Further, when confronted with evidence contrary to expectations, such as when a GIS yields an unexpected answer to a query, people often rationalize their beliefs without fundamentally altering them (Feltovich et al. 2004). With respect to the naive geography elements that carry reductive bias, Egenhofer and Mark’s design philosophy entails large risks that are counterproductive to geographic understanding and should not be built into a GIS.
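The magnitude of error introduced by the “flat Earth” simplification is easy to demonstrate. The sketch below is my own (in Python; the city coordinates and function names are illustrative, not drawn from Egenhofer and Mark). It compares a naive planar distance computed directly on latitude/longitude pairs against the great-circle distance from the haversine formula, assuming a spherical Earth of radius 6371 km:

```python
import math

EARTH_RADIUS_KM = 6371.0  # assumed mean Earth radius

def planar_distance_km(lat1, lon1, lat2, lon2):
    """Naive 'flat Earth' distance: treat degrees of latitude and
    longitude as interchangeable planar units, then scale to km.
    This mirrors the 'common simplification' of ignoring curvature."""
    km_per_degree = 111.32  # roughly true for latitude; wrong for longitude away from the equator
    return math.hypot(lat2 - lat1, lon2 - lon1) * km_per_degree

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance on a spherical Earth."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

# Seattle to Boston (illustrative coordinates):
seattle, boston = (47.61, -122.33), (42.36, -71.06)
print(planar_distance_km(*seattle, *boston))  # ~5740 km
print(great_circle_km(*seattle, *boston))     # ~4000 km
```

Over the width of a continent, the naive figure overshoots by more than 40 percent, because a degree of longitude shrinks toward the poles; a conceptual model that ignores curvature is not merely imprecise but systematically wrong.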
Functionality dissonance
Egenhofer and Mark offer naive geography as “the basis for the design of intelligent GISs that will act and respond as a person would” (Egenhofer and Mark 1995). Developing such intelligent GIS technology risks creating unrealistic expectations and misunderstandings about the abilities of computer software (Swartz 2003). Both problems stem from functionality dissonance, a phrase adapted from the computer science and artificial intelligence issue of anthropomorphic dissonance. Functionality dissonance refers to the gap between user expectations and actual software abilities (Watt 1998). As software appears to become more intelligent, users form ever higher expectations about its functionality (Swartz 2003). As the gap widens, so do disappointment, frustration, and dissatisfaction (Swartz 2003). Further, Shneiderman (1998) argues that behaviors attributing autonomy to a computer “can deceive, confuse, and mislead users.” Echoing concerns in the artificial intelligence literature, naive geography may cause people to develop “an erroneous model of how computers work and what their capabilities are” (Shneiderman 1998). Since a GIS is a tight coupling of spatial data, analysis, and visualization technology, such intelligent software may create incorrect conceptual models of each of these components. The recent advent of virtual globes provides a relevant example. Virtual globe software allows users to interact with and query overhead imagery and spatial data on a three-dimensional representation of Earth (Butler 2006). The sophisticated technology has given rise to misconceptions about the software’s data, analysis, and output capabilities. For instance, virtual globe imagery sometimes is manipulated by the software, without explanation, for cartographic reasons (Figure 1). Since the imagery looks “real,” unexpected results may be difficult to reconcile mentally. These misconceptions are exacerbated by ambiguity about what the software can actually do. For instance, the software generally does not offer real-time imagery, though it has the technical capability to do so. Users might also expect the software to possess sophisticated GIS functionality, when only basic tools are offered. In fact, algorithms for performing GIS operations on spheres and ellipsoids largely do not exist (Goodchild 2005). While such gaps are research opportunities for academics, lay users may formulate misconceptions of geographic space from their interpretation of stylishly represented but erroneous spatial data.
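The gap in spherical algorithms is concrete. As a toy illustration (my own Python sketch, not any virtual globe’s actual code, and assuming a spherical Earth of radius 6371 km), compare a planar shoelace area naively scaled from “square degrees” with the exact spherical-triangle area given by L’Huilier’s theorem:

```python
import math

R_KM = 6371.0  # assumed spherical Earth radius

def angular_distance(p, q):
    """Central angle in radians between two (lat, lon) points, via haversine."""
    phi1, lam1, phi2, lam2 = map(math.radians, (*p, *q))
    a = (math.sin((phi2 - phi1) / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin((lam2 - lam1) / 2) ** 2)
    return 2 * math.asin(math.sqrt(a))

def spherical_triangle_area_km2(p, q, r):
    """Exact area of a triangle with great-circle edges, from the
    spherical excess E computed with L'Huilier's theorem."""
    a, b, c = angular_distance(q, r), angular_distance(p, r), angular_distance(p, q)
    s = (a + b + c) / 2
    e = 4 * math.atan(math.sqrt(
        math.tan(s / 2) * math.tan((s - a) / 2)
        * math.tan((s - b) / 2) * math.tan((s - c) / 2)))
    return e * R_KM ** 2

def planar_triangle_area_km2(p, q, r):
    """Naive shoelace area on raw lat/lon coordinates, crudely scaled
    as if one degree were a fixed 111.32 km everywhere."""
    (y1, x1), (y2, x2), (y3, x3) = p, q, r
    deg2 = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2
    return deg2 * 111.32 ** 2

# Octant triangle: two points on the equator 90 degrees apart, plus the north pole.
tri = [(0.0, 0.0), (0.0, 90.0), (90.0, 0.0)]
print(spherical_triangle_area_km2(*tri))  # ~63.8 million km^2
print(planar_triangle_area_km2(*tri))     # ~50.2 million km^2
```

Planar routines like the shoelace formula are trivial to write; correct spherical or ellipsoidal counterparts for more complex operations are exactly the missing algorithms noted above.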
Lessened user control
Egenhofer and Mark (1995) offer that “naive geography is also the basis of the design of intelligent GISs that will act and respond as a person would…” As such, the naive geography design philosophy faces challenges similar to those posed by user interface agents, a computer science term for intelligent software assistance. User interface agents are programs that help users “achieve the best outcome or, when there is uncertainty, the best expected outcome” (Russell and Norvig 2003, p. 4). A prominent example of user agent technology, Clippy the Paperclip, highlights the problems of designing software to predict user needs. Introduced with Microsoft Office in 1997, the cartoon paperclip would appear and open a window of contextually sensitive help based on user actions (Swartz 2003). For instance, in Microsoft Word, if a user began typing a salutation, like “Dear Chris,” Clippy would appear and offer assistance in formatting a letter. The problems of such intelligent software assistance include making users feel out of control and lowering their self-reliance (Swartz 2003). Quintanar et al. (1982) found that with such help, students felt less responsible for their performance. Swartz (2003) also reports that many expert users found the unsolicited assistance irritating; indeed, due to widespread user dissatisfaction, Microsoft disabled Clippy in subsequent versions of Office.
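To make the mechanism concrete, here is a minimal sketch of the kind of rule-based trigger behind a Clippy-style agent (Python; the patterns and messages are hypothetical illustrations of mine, not Microsoft’s actual heuristics):

```python
import re

# Toy rule table for an interface agent: each rule pairs a trigger
# pattern with a canned offer of help. Hypothetical illustration only.
RULES = [
    (re.compile(r"^\s*dear\s+\w+,", re.IGNORECASE),
     "It looks like you're writing a letter. Would you like help?"),
    (re.compile(r"^\s*to\s+whom\s+it\s+may\s+concern", re.IGNORECASE),
     "It looks like you're writing a formal letter. Would you like help?"),
]

def agent_suggestion(text: str):
    """Return the first matching canned suggestion, or None."""
    for pattern, message in RULES:
        if pattern.search(text):
            return message
    return None

# The heuristic fires correctly on a real salutation...
print(agent_suggestion("Dear Chris,"))
# ...but also on text that merely resembles one:
print(agent_suggestion("Dear reader, this is not a letter at all."))
```

The agent cannot distinguish a genuine letter from text that merely resembles one; it interrupts on shallow evidence, and users experience such interruptions as a loss of control rather than as assistance.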
When the computer leads, users become less involved in the logic and flow of the software’s problem solving. Unlike word processing, work with GIS often includes exploratory problem solving, non-task-specific use, and experimentation. Beyond their potential for innovation, these unconventional operations shape user concepts of geographic space. Since common-sense pragmatism usually conflicts with freeform imagination, the naive geography design philosophy would hinder creative uses of GIS.
Counter to Egenhofer and Mark’s (1995) goal of “empowering people” through having “GIS act and respond as a person would,” naive geography design risks encouraging erroneous concepts of geographic space, frustrating and alienating users, and lessening user control of GIS software.
References
Butler, D. 2006. Virtual globes: the web-wide world. Nature, 439(7078): 776-778.
Egenhofer, M. and D. Mark. 1995. Naive geography. In Spatial Information Theory: a theoretical basis for GIS, volume 988 of Lecture Notes in Computer Science, ed. A. Frank and W. Kuhn, 1-16. Berlin: Springer-Verlag.
Feltovich, P., R. Coulson, and R. Spiro. 2001. Learners’ (mis)understanding of important and difficult concepts: a challenge for smart machines in education. In Smart Machines in Education: The Coming Revolution in Educational Technology, ed. K. Forbus and P. Feltovich, 349-376. Menlo Park, CA: AAAI Press.
Feltovich, P., R. Hoffman, D. Woods, and A. Roesler. 2004. Keeping it too simple: how the reductive tendency affects cognitive engineering. IEEE Intelligent Systems, May/June 2004: 90-94.
Feltovich, P., R. Spiro, and R. Coulson. 1989. The nature of conceptual understanding in biomedicine: the deep structure of complex ideas and the development of misconceptions. In Cognitive Science in Medicine, ed. D. Evans and V.L. Patel, 113-172. Cambridge, MA: MIT Press.
Goodchild, M. 2005. What does Google Earth mean for the spatial sciences? GIS Ireland Conference proceedings, October 13, 2005, Dublin, Ireland.
Quintanar, L.R., C.R. Crowell, and J.B. Pryor. 1982. Human-computer interaction: a preliminary social psychological analysis. Behavior Research Methods & Instrumentation 14(2): 210-220.
Russell, S. and P. Norvig. 2003. Artificial Intelligence: a modern approach. Second edition. Upper Saddle River, NJ: Pearson Education. 1132 pp.
Shneiderman, B. 1998. Designing the user interface: strategies for effective human-computer interaction. Third edition. Reading, MA: Addison-Wesley Longman.
Swartz, L. 2003. Why people hate the paperclip: labels, appearance, behavior and social responses to user interface agents. BS Honors thesis, Stanford University.
Watt, S. 1998. Psychological agents and the new web media. In The Knowledge Web: Learning and Collaborating on the Net, ed. M. Eisenstadt and T. Vincent. London: Kogan Page.