A hallmark of systems engineering, distinguishing it from less rigorous systems creation activities and essential to success in developing large-scale and complex systems and managing them throughout their life cycles, is the rigorous use of requirement specifications, requirements-centric design, multi-stage testing and revision, and other risk-management and quality assurance techniques. Various sub-domains within systems engineering apply these risk- and complexity-management techniques to systems overall, to system components, to component interfaces, and to engineering, interface, and other processes. Quality at any of these levels is defined in terms of the degree to which the system, component, process, etc., meets the specified requirements. Analysis and specification of requirements and functions at each of these levels, along with identification and application of relevant quality measures, is an essential part of good systems engineering.

It is especially notable, therefore, that projects involving ontologies as part of engineered systems, or as part of systems engineering processes, tend to include little or no ontology quality assurance. Even in systems engineering projects in which rigorous attention is paid to the identification and specification of other components and aspects of systems, the identification and specification of ontology requirements receives little to no attention. Reasons for this exception to otherwise rigorous methodology vary, but can include: a belief that ontologies are non-technical artifacts, not subject to engineering methodologies; a lack of necessary resources; an absence of concern for related areas of accountability; a belief that variations in ontology do not affect end functionality or performance; or a belief that, however desirable quality assurance measures for ontologies might be, no implementable, usable, reliable measures exist. [1]

In order to know how we, the combined communities of systems engineers and ontologists, ought to respond to this lack, we need some sense of the consequences. Someone both experienced with large-scale ontology applications and familiar with ontologies on a deep technical level might expect this lack of quality assurance measures to be a significant obstacle to project success. Is this the case? Or are those who believe that ontologies need no quality assurance correct? If there is no detrimental impact, then perhaps the right response is to reassess whatever beliefs lead to the expectation of harm. If there are significant negative consequences, however, then additional questions follow.

Some of the varied reasons for the lack of ontology quality assurance measures are given above. What are their sources, and how can they be addressed? To what extent do these beliefs arise from a lack of ontological experience (lack of personnel specifically trained and experienced in ontology engineering; scarcity, among existing staff, of personnel with even secondary experience with ontologies and ontology applications)? From business practices such as vendor-driven ontology selection (e.g., use of ontologies provided by vendors who supply other components or aspects of the system)? From a general perception that no useful measures of ontology quality, or methods for assuring it, exist? Most importantly, what is the state of ontology quality assessment and assurance? What measures and methods do exist, and how can they be applied? What about those projects in which some ontology quality assurance measures are implemented? What lessons learned from these projects can be of use to others? What research has been done, or is currently being done, in this area?
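As a concrete illustration of the kind of measure at issue, even a very lightweight, automatable check can serve as a partial quality measure. The sketch below is an illustration only, not a measure drawn from the Summit discussions; it assumes the Python rdflib library and a local file named "ontology.ttl" (a hypothetical filename), and it flags named classes that lack a human-readable rdfs:label, one small proxy for the documentation quality of an ontology.

    # A minimal sketch, assuming Python 3 with rdflib installed and a local
    # OWL/RDF file named "ontology.ttl" (hypothetical). It reports named
    # classes with no rdfs:label -- one small, scriptable proxy for
    # documentation quality, not a complete quality measure.
    from rdflib import Graph

    g = Graph()
    g.parse("ontology.ttl")  # rdflib guesses the serialization from the file extension

    QUERY = """
    PREFIX owl:  <http://www.w3.org/2002/07/owl#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?cls WHERE {
        ?cls a owl:Class .
        FILTER isIRI(?cls)                            # skip anonymous (blank-node) classes
        FILTER NOT EXISTS { ?cls rdfs:label ?label }  # no human-readable label
    }
    """

    for row in g.query(QUERY):
        print("Class without a label:", row.cls)

A check like this says nothing about an ontology's logical correctness or fitness for purpose, but it is representative of the kind of measure that can be scripted into an existing build or test pipeline with modest effort.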
________________
[1] During the initial weeks of the Summit, the Ontology Quality for Big Systems Co-Champions closely attended to, and asked questions about, ontology quality experiences in big systems engineering projects. Based on prior experience, there was some expectation of reports regarding difficulties with ontology quality and ontology quality assurance, and of resulting problems for projects. Such reports were indeed forthcoming. However, they were fewer than the reports of projects conducted without any sense of the quality of the ontologies used, or indeed any idea of how to get such information. Following up on this finding, the Co-Champions developed a survey to elicit more detailed information about experiences related to ontology quality and big systems projects, without relying on the respondent, or indeed the project team, having thought about ontology quality explicitly, or having knowledge of, or agreement on, the factors contributing to such quality. The survey was designed to be as neutral as possible toward varying theories of ontology quality, and to elicit enough potentially relevant information to let patterns emerge. The results of this survey are reflected in this text. Details of these results can be found at .