SE – SIGCSE 2022 Version

Software Engineering (SE)

As far back as the early 1970s, Dave Parnas allegedly said, “Software engineering is the multi-person construction of multi-version programs.” This is an essential insight: while programming is the skill that governs our ability to write a program, software engineering is distinct in two dimensions. 

 

First, a software engineering project is a team endeavor. Being a solitary programming expert is insufficient. Skilled software engineers will additionally demonstrate expertise in communication and collaboration. Programming may be an individual activity, but software engineering is a collaborative one, deeply tied to issues of professionalism, teamwork, and communication. 

 

Second, a software engineering project is usually “multi-version.” It has an expected lifespan; it needs to function properly for months, years, or decades. Features may be added or removed to meet product requirements. The technological context will change, as our computing platforms evolve, programming languages change, dependencies upgrade, etc. This exposure to matters of time and change is novel when compared to a programming project: it isn’t enough to build a thing that works; it must work and stay working. Many of the most challenging topics in tech share “time will lead to change” as a root cause: backward compatibility, version skew, dependency management, schema changes, protocol evolution.

 

Software engineering presents a particularly difficult challenge for academic learning. Given that the major differences between programming and software engineering are time and teamwork, it is hard to design lessons that require successful teamwork and that faithfully present the risks of time. Additionally, many topics in software engineering will be more authentic and more relevant if and when our learners experience collaborative and long-term software engineering projects in situ rather than in the classroom. Whether that happens through an internship, involvement in an open-source project, or a full-time engineering role, a month of full-time hands-on experience has more available hours than the average software engineering course.

 

Thus, a software engineering curriculum should focus primarily on ideas that are needed by a majority of new-grad hires, and that either are novel for those who are trained primarily as programmers, or that are abstract concepts that may not get explicitly stated/shared on the job. Such topics include, but are not limited to:

  • Testing
  • Teamwork, collaboration
  • Communication
  • Design
  • Maintenance and Evolution
  • Software engineering tools

Some such material can be presented in the abstract and is reasonably suited to a standard lecture or lecture+lab course. Discussing theoretical underpinnings of version control systems, or branching strategies in such systems, can be an effective way to familiarize students with those ideas. Similarly, a theoretical discussion can highlight the difference between static and dynamic analysis tools, or may motivate discussion of diamond dependency problems in dependency networks. 

 

On the other hand, many of the fundamental topics of software engineering are best experienced in a hands-on fashion. Historically, project-oriented courses have been a common vehicle for such learning. We believe that such experience is valuable but also carries some notable risks: students may form erroneous notions about the difficulty and complexity of collaboration if their only exposure is a single project with teams formed of other novice software engineers. It falls to instructors to decide on the right balance between theoretical material and hands-on projects – neither is a perfect vehicle for this challenging material. We strongly encourage instructors of project courses to aim for iteration and fast feedback – a few simple tasks repeated (e.g., in an Agile-structured project) is better than singular high-friction introductions to many types of tasks. If long-running project courses are not an option, anything that can expose learners to the collaborative and long-term aspects of software engineering is valuable: adding features to an existing codebase, collaborating on distinct parts of a larger whole, pairing up to write an encoder and decoder, etc.

 

All evidence suggests that the role of software in our society will continue to grow for the foreseeable future, and yet the era of “two programmers in a garage” seems to have drawn to a close. Most important software these days is clearly a team effort, building on existing code and leveraging existing functionality. The study of software engineering skills is a deeply important counterpoint to the everyday experience of computing students – we must impress on them the reality that few software projects are written from scratch as a solo endeavor. Communication, teamwork, planning, testing, and tooling become far more important as our students move on from the classroom and make their mark on the wider world.

 

This document shifts the focus of the Software Engineering knowledge area in a few ways compared to the goals of CS2013:

  • More explicit focus on the software workflow (version control, testing, code review, tooling)
  • Less focus on team leadership and project management.
  • More focus on team participation, communication, and collaboration

SE. Software Engineering (6 Core-Tier1 hours; 21 Core-Tier2 hours)   

                                           Core-Tier1 hours   Core-Tier2 hours   Includes Electives
  SE/Teamwork                                     2                  2
  SE/Tools and Environments                       1                  3
  SE/Product Requirements                         –                  2                 Yes
  SE/Software Design                              1                  4                 Yes
  SE/Software Construction                        1                  3                 Yes
  SE/Software Verification and Validation         1                  3                 Yes
  SE/Refactoring and Code Evolution               –                  2                 Yes
  SE/Software Reliability                         –                  2                 Yes
  SE/Formal Methods                               0                  0                 Yes

 

SE/Teamwork

[2 Core-Tier1 hours, 2 Core-Tier2 hours]

Because introductory programming is typically an individual activity, most students in an introductory SE course will have little or no exposure to the collaborative nature of SE. Practice may help, but lecture and discussion time spent on the value of clear and polite communication, collaboration, etc. is essential.

Topics:

[Core-Tier1]

  • Effective communication
  • Common causes of team conflict, and approaches for conflict resolution
  • Cooperative programming 
    • Pair programming
    • Code review
  • Roles and responsibilities in a software team
    • Advantages and realities of teams
  • Team processes 
    • Responsibilities for tasks, effort estimation, meeting structure, work schedule
  • Importance of team diversity

[Core-Tier2]

  • Interfacing with those outside the team, as a team
    • Management & other non-technical teams
    • Customers
    • Users
  • Risks associated with physical, distributed, hybrid and virtual teams 
    • Including communication, perception, structure, points of failure, mitigation and recovery, etc.

Learning Outcomes:

[Core-Tier1]

  1. Communicate effectively with other team members. 
  2. Understand the sources of, hazards of, and potential benefits of team conflict. 
  3. Apply a conflict resolution strategy in a team setting. 
  4. Effectively participate in cooperative development/programming. 
  5. Identify and justify necessary roles and responsibilities in a software development team. 
  6. Create and follow an agenda for a team meeting. 
  7. Demonstrate through involvement in a team project the central elements of team building and team management including creating and executing a team work plan. 
  8. Understand the importance of, and the benefits that, diversity brings to a software development team. 

 

[Core-Tier2]

  1. Understand the importance of, and strategies for, interfacing as a team with stakeholders outside the team, on both technical and non-technical levels. 
  2. Describe the risks associated with physical, distributed, hybrid, and virtual teams, including possible points of failure, and how to mitigate, recover from, and learn from failures. 

 

SE/Tools and Environments

Industry reliance on SE tools has exploded in the past generation, with version control becoming ubiquitous, testing frameworks growing in popularity, increased reliance on static and dynamic analysis in practice, and near-ubiquitous use of continuous integration systems. Increasingly powerful IDEs provide code searching and indexing capabilities, as well as small scale refactoring tools and integration with other SE tools. An understanding of the nature of these tools is broadly valuable – especially version control systems.

[1 Core-Tier1 hour, 3 Core-Tier2 hours]

Topics:

[Core-Tier1]

  • Software configuration management and version control
    • Configuration in version control, reproducible builds/configuration
    • Version control branching strategies. Development branches vs. release branches. Trunk-based development.

[Core-Tier2]

  • Release management
  • Testing tools including static and dynamic analysis tools
  • Software process automation
    • Build systems – the value of fast, hermetic, reproducible builds, compare/contrast approaches to building a project
    • Continuous Integration (CI) – the use of automation and automated tests to do preliminary validation that the current head/trunk revision builds and passes (basic) tests 
    • Dependency management – updating external/upstream dependencies, package management, SemVer
  • Design and communication tools (docs, diagrams, common forms of design diagrams)
  • Tool integration concepts and mechanisms
  • Use of modern IDE facilities – debugging, refactoring, searching/indexing, etc.
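The dependency-management and SemVer topics above can be illustrated with a short sketch (function names are hypothetical) of the compatibility rule an automated dependency updater might apply:

```python
# Illustrative sketch: the compatibility rule a dependency updater might
# apply under Semantic Versioning (SemVer). A version is MAJOR.MINOR.PATCH;
# a MAJOR bump signals breaking changes, MINOR adds features compatibly,
# PATCH fixes bugs. (SemVer treats 0.x versions as unstable; that caveat
# is ignored here for brevity.)

def parse(version: str) -> tuple[int, int, int]:
    """Split "MAJOR.MINOR.PATCH" into a comparable tuple of ints."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def compatible_upgrade(current: str, candidate: str) -> bool:
    """True if `candidate` should be a drop-in replacement for `current`:
    same MAJOR version and not a downgrade."""
    cur, cand = parse(current), parse(candidate)
    return cand[0] == cur[0] and cand >= cur

print(compatible_upgrade("1.4.2", "1.5.0"))  # True: compatible minor bump
print(compatible_upgrade("1.4.2", "2.0.0"))  # False: major bump may break callers
```

A sketch at this scale also motivates the diamond-dependency discussion: two libraries pinning incompatible MAJOR versions of a shared dependency cannot both be satisfied.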

 

Learning Outcomes:

[Core-Tier1]

  1. Describe the difference between centralized and distributed software configuration management.  
  2. Describe how version control can be used to help manage software release management. 
  3. Identify configuration items and use a source code control tool in a small team-based project. 

 

[Core-Tier2]

  1. Describe how available static and dynamic test tools can be integrated into the software development environment.  
  2. Understand the use of CI systems as a ground-truth for the state of the team’s shared code (build and test success). 
  3. Describe the issues that are important in selecting a set of tools for the development of a particular software system, including tools for requirements tracking, design modeling, implementation, build automation, and testing. 
  4. Demonstrate the capability to use software tools in support of the development of a software product of medium size. 

 

SE/Product Requirements

Knowing how to build something is of little help if we do not know what to build. Product Requirements (aka Requirements Engineering, Product Design, Product Requirements elicitation, PRDs, etc.) introduces students to the processes surrounding the specification of the broad requirements governing development of a new product or feature.

[2 Core-Tier2 hours]

Topics:

[Core-Tier2]

  • Describing functional requirements using, for example, use cases or user stories
    • Using at least one method of documenting and structuring functional requirements
    • Understanding how the method supports design and implementation
    • Strengths and weaknesses of using a particular approach
  • Properties of requirements including consistency, validity, completeness, and feasibility
  • Requirements elicitation
    • Sources of requirements, for example, users, administrators, or support personnel
    • Methods of requirement gathering, for example, surveys, interviews, or behavioral analysis
  • Non-functional requirements, for example, security, usability, or performance (aka Quality Attributes)
    • Cross reference IAS/Secure Software Engineering
  • Risk identification and management
  • Communicating and/or formalizing requirement specifications

 

[Elective]

  • Prototyping
    • A tool for both eliciting and validating/confirming requirements
  • Product evolution
    • When requirements change, how to understand what effect that has and what changes need to be made
  • Effort estimation
    • Learning techniques for better estimating the effort required to complete a task
    • Practicing estimation and comparing to how long tasks actually take
    • Effort estimation is quite difficult, so students’ estimates are likely to be far off in many cases, but seeing the process play out with their own work is valuable

 

Illustrative Learning Outcomes:

[Core-Tier2] 

  1. Compare different methods of eliciting requirements along multiple axes. 
  2. Identify differences between two methods of describing functional requirements (e.g. use cases, user stories) and the situations where each would be preferred. 
  3. Identify which behaviors are required, allowed, or barred from a given set of requirements and a list of candidate behaviors. 
  4. Collect a set of requirements for a simple software system. 
  5. Identify areas of a software system that need to be changed, given a description of the system and a set of new requirements to be implemented. 
  6. Identify the functional and non-functional requirements in a set of requirements. 

[Elective] 

  1. Create a prototype of a software system to validate a set of requirements. (Building a mock-up, MVP, etc)
  2. Estimate the time to complete a set of tasks, then compare estimates to the actual time taken.
  3. Determine an implementation sequence for a set of tasks, adhering to dependencies between them, with a goal to retire risk as early as possible.
  4. Write a requirement specification for a simple software system.

 

SE/Software Design 

While Product Requirements focuses on the user-facing functionality of a software system, Software Design focuses on the engineer-facing design of internal software components. This encompasses large design concerns such as software architecture, as well as small-scale design choices like API design. 

[1 Core-Tier1 hour; 4 Core-Tier2 hours]

Topics:

[Core-Tier1]

  • System design principles
    • Levels of abstraction (e.g. architectural design and detailed design)
    • Separation of concerns
    • Information hiding
    • Coupling and cohesion
  • Software architecture
    • Design paradigms
      • Top-down functional decomposition / layered design
      • Data-oriented architecture
      • Object-oriented analysis and design
      • Event-driven design
    • Standard architectures (e.g. client-server, n-layer, pipes-and-filters, Model View Controller)
    • Identifying component boundaries and dependencies
  • Programming in the large vs. programming in the small

 

[Core-Tier2]

  • API design principles
    • Consistency
      • Consistent APIs are easier to learn and less error-prone
      • Consistency is both internal (between different portions of the API) and external (following common API patterns)
    • Composability
    • Documenting contracts
      • API operations should describe their effect on the system, but not generally their implementation
      • Preconditions, postconditions, and invariants
    • Expandability
      • Cross reference SE/Refactoring and Code Evolution
    • Error reporting
      • Errors should be clear, predictable, and actionable
      • Input that does not match the contract should produce an error
  • Identifying and codifying data invariants and time invariants
  • Structural and behavioral models of software designs
  • Data design
    • Data structures
    • Storage systems
    • Cross reference Information Management, particularly IM/Data Modeling
  • Requirement traceability
    • Understanding which requirements are satisfied by a design
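The “documenting contracts” and “error reporting” principles above can be made concrete with a small sketch; the operation and all names in it are hypothetical:

```python
# Illustrative sketch of a documented contract: the docstring states the
# operation's effect, preconditions, and postcondition (not its
# implementation), and contract-violating input produces a clear,
# actionable error. The operation and its names are hypothetical.

def reserve_seats(available: int, requested: int) -> int:
    """Reserve `requested` seats out of `available` and return the number
    of seats remaining.

    Preconditions: available >= 0 and 1 <= requested <= available.
    Postcondition: result == available - requested.
    Raises ValueError if a precondition is violated.
    """
    if available < 0:
        raise ValueError(f"available must be non-negative, got {available}")
    if not 1 <= requested <= available:
        raise ValueError(
            f"requested must be between 1 and {available}, got {requested}")
    return available - requested

print(reserve_seats(10, 3))  # 7
```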

 

[Elective]

  • Design modeling, for instance with class diagrams, entity relationship diagrams, or sequence diagrams
  • Measurement and analysis of design quality
  • Principles of secure design and coding (cross reference IAS/Principles of Secure Design)
    • Principle of least privilege
    • Principle of fail-safe defaults
    • Principle of psychological acceptability
  • Evaluating design tradeoffs (e.g. efficiency vs. reliability, security vs. usability)

 

Illustrative Learning Outcomes:

[Core-Tier1]

  1. Identify the standard software architecture of a given high-level design.
  2. Use a particular design paradigm to design a simple software system and explain how system design principles have been applied in this design.
  3. Adapt a flawed system design to better follow principles such as separation of concerns or information hiding.
  4. Identify the dependencies among a set of software components in an architectural design.

[Core-Tier2]

  1. Design an API for a single component of a large software system, including identifying and documenting each operation’s invariants, contract, and error conditions.
  2. Evaluate an API description in terms of consistency, composability, and expandability.
  3. Expand an existing design to include a new piece of functionality.
  4. Design a set of data structures to implement a provided API surface.
  5. Identify which requirements are satisfied by a provided software design.

[Elective]

  1. Translate a natural language software design into class diagrams.
  2. Adapt a flawed system design to better follow the principles of least privilege and fail-safe defaults.
  3. Contrast two software designs across different qualities, such as efficiency or usability.

 

SE/Software Construction 

Software Construction focuses on practices that influence the direct production of software: use of tests, test driven development, coding style. More advanced topics extend into secure coding, dependency injection, work prioritization, etc.

[1 Core-Tier1 hour, 3 Core-Tier2 hours]

Topics:

[Core-Tier1]

  • Practical small-scale testing
    • Unit testing
    • Test-driven development – This is particularly valuable for students psychologically: it is far easier to engage constructively with the challenge of identifying challenging inputs for a given API (edge cases, corner cases) a priori. If students implement first, the instinct is often to avoid trying to crash their new creation; a test-first approach gives them the intellectual satisfaction of spotting the problem cases up front and then watching more tests pass as development proceeds.
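As a classroom-scale illustration of unit testing in a test-first style, the sketch below pairs a small function with the edge-case tests such an exercise would begin from (the `median` exercise is a stand-in, not prescribed):

```python
# Illustrative test-first sketch: the edge cases below (single element,
# even length, unsorted input, duplicates) were named before writing the
# implementation. The `median` exercise is a stand-in, not prescribed.
import unittest

def median(values):
    """Median of a non-empty list of numbers, written to satisfy the tests."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

class MedianTests(unittest.TestCase):
    def test_single_element(self):
        self.assertEqual(median([7]), 7)

    def test_even_length_averages_middle_pair(self):
        self.assertEqual(median([1, 2, 3, 4]), 2.5)

    def test_unsorted_input(self):
        self.assertEqual(median([9, 1, 5]), 5)

    def test_duplicates(self):
        self.assertEqual(median([2, 2, 2, 2]), 2)

if __name__ == "__main__":
    unittest.main(exit=False)
```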

[Core-Tier2]

  • Coding style
    • Style guides
    • Commenting
    • Naming
  • “Best Practices” for coding: techniques, idioms/patterns, mechanisms for building quality programs (cross reference IAS/Defensive Programming; SDF/Development Methods)
    • Defensive coding practices
    • Secure coding practices and principles
    • Using exception handling mechanisms to make programs more robust, fault-tolerant
  • Debugging
  • Use of libraries and frameworks developed by others

[Elective]

  • Larger-scale testing
    • Test doubles (stubs, mocks, fakes)
    • Dependency injection
  • Work sequencing, including dependency identification, milestones, and risk retirement
    • Dependency identification: Identifying the dependencies between different tasks
    • Milestones: A collection of tasks that serve as a marker of progress when completed.  Ideally, the milestone encompasses a useful unit of functionality.
    • Risk retirement: Identifying what elements of a project are risky and prioritizing completing tasks that address those risks
  • Potential security problems in programs
  • Buffer and other types of overflows
  • Race conditions
  • Improper initialization, including choice of privileges
  • Input validation
  • Documentation (autogenerated)
  • Development context: “green field” vs. existing code base
  • Change impact analysis
  • Change actualization
  • Release management
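The elective topics on test doubles and dependency injection can be illustrated with a minimal sketch; the names are illustrative and not tied to any particular framework:

```python
# Illustrative sketch of dependency injection with a test double: the class
# depends on an injected "clock" callable instead of calling time.time()
# directly, so tests can substitute a controllable fake. Names are
# illustrative and not tied to any particular framework.
import time

class SessionToken:
    def __init__(self, lifetime_seconds, clock=time.time):
        self._clock = clock  # injected dependency; real clock by default
        self._expires_at = clock() + lifetime_seconds

    def is_expired(self):
        return self._clock() >= self._expires_at

# In a test, inject a fake clock the test controls:
fake_now = [1000.0]
token = SessionToken(lifetime_seconds=60, clock=lambda: fake_now[0])
print(token.is_expired())  # False: fake clock reads 1000.0, expiry at 1060.0
fake_now[0] = 2000.0
print(token.is_expired())  # True: fake time advanced past expiry
```

Injecting the dependency makes a time-dependent behavior deterministic to test, without sleeping or patching globals.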

 

Learning Outcomes:

[Core-Tier1]

  1. Write appropriate unit tests for a small component (several functions, a single type, etc).

 

[Core-Tier2]

  1. Describe techniques, coding idioms and mechanisms for implementing designs to achieve desired properties such as reliability, efficiency, and robustness. 
  2. Build robust code using exception handling mechanisms. 
  3. Describe secure coding and defensive coding practices. 
  4. Select and use a defined coding standard in a small software project. 
  5. Compare and contrast integration strategies including top-down, bottom-up, and sandwich integration. 
  6. Describe the process of analyzing and implementing changes to code base developed for a specific project. 
  7. Describe the process of analyzing and implementing changes to a large existing code base. 

 

[Elective]

  1. Rewrite a simple program to remove common vulnerabilities, such as buffer overflows, integer overflows and race conditions. 
  2. Write a software component that performs some non-trivial task and is resilient to input and run-time errors. 

 

SE/Software Verification and Validation 

Software Verification and Validation focuses on how to improve the value of testing: understanding the role of testing, its failure modes, and the differences between good tests and poor ones. 

[1 Core-Tier1 hour; 3 Core-Tier2 hours]

Topics:

[Core-Tier1]

  • Verification and validation concepts
    • Verification: Are we building the thing right?
    • Validation: Did we build the right thing? 
  • Why testing matters
    • Does the component remain functional as the code evolves?
  • Testing objectives
    • Usability
    • Reliability
    • Conformance to specification
    • Performance
    • Security (cross reference IAS/Secure Software Engineering)
  • Test kinds
    • Unit
    • Integration
    • Validation
    • System

 

[Core-Tier2]

  • Test planning and generation
    • Test case generation, from formal models, specifications, etc
    • Test coverage
      • Test matrices
      • Code coverage (how much of the code is tested)
      • Environment coverage (how many hardware architectures, OSes, browsers, etc are tested)
    • Test data and inputs
  • Test development (cross reference SDF/Development Methods)
    • Test-driven development (cross reference SE/Software Construction)
    • Object oriented testing, mocking, and dependency injection
    • Black-box and white-box testing techniques
    • Test tooling, including code coverage, static analysis, and fuzzing
  • Verification and validation in the development cycle
    • Code reviews (cross reference SE/Software Construction)
    • Test automation, including automation of tooling
    • Pre-commit and post-commit testing
    • Trade-offs between test coverage and throughput/latency of testing
    • Defect tracking and prioritization
      • Reproducibility of reported defects
  • Domain specific verification and validation challenges
    • Performance testing and benchmarking
    • Asynchrony, parallelism, and concurrency
    • Safety-critical
    • Numeric
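The test-matrix and coverage topics above can be illustrated with a short sketch in which each row of the matrix covers one case class; the property under test is a stand-in:

```python
# Illustrative sketch of a small test matrix: each row covers one case
# class, and the loop plays the role a parameterized test runner would.
# The property under test (`contains_digit`) is a stand-in.

def contains_digit(s: str) -> bool:
    return any(ch.isdigit() for ch in s)

CASES = [
    ("", False),     # empty input
    ("abc", False),  # no digits
    ("a1c", True),   # digit in the middle
    ("123", True),   # all digits
]

for text, expected in CASES:
    assert contains_digit(text) == expected, (text, expected)
print(f"{len(CASES)} matrix cases passed")
```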

 

[Elective]

  • Verification and validation tooling and automation
    • Static analysis
    • Code coverage
    • Fuzzing
    • Dynamic analysis and fault containment (sanitizers, etc)
    • Fault logging and fault tracking
  • Test planning and generation
    • Fault estimation and testing termination including defect seeding
    • Use of random and pseudo random numbers in testing
  • Performance testing and benchmarking
    • Throughput and latency
    • Speedup and scaling
    • Identifying and measuring figures of merit
    • Common performance bottlenecks
      • Compute-bound
      • Memory-bandwidth bound
      • Latency-bound
    • Statistical methods and best practices for benchmarking
      • Estimation of uncertainty
      • Confidence intervals
    • Analysis and presentation (graphs, etc)
    • Timing techniques
  • Testing asynchronous, parallel, and concurrent systems
  • Verification and validation of non-code artifacts (documentation, training materials)
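The statistical benchmarking topics above can be illustrated with a short sketch that reports a mean and a normal-approximation 95% confidence interval rather than trusting a single noisy measurement; the workload is a stand-in:

```python
# Illustrative benchmarking sketch: repeat the timed run and report a mean
# with a normal-approximation 95% confidence interval, rather than trusting
# a single noisy measurement. The workload is a stand-in.
import statistics
import time

def timed(workload, repeats=20):
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)
    mean = statistics.mean(samples)
    # Half-width of the CI: 1.96 standard errors (normal approximation).
    half_width = 1.96 * statistics.stdev(samples) / len(samples) ** 0.5
    return mean, half_width

mean, ci = timed(lambda: sum(range(100_000)))
print(f"{mean * 1e3:.3f} ms +/- {ci * 1e3:.3f} ms (95% CI)")
```

Even this small sketch surfaces discussion points: warm-up effects, outliers, and why more repeats narrow the interval.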

 

Learning Outcomes:

[Core-Tier1]

  1. Explain why testing is important. 
  2. Distinguish between program validation and verification. 
  3. Describe different objectives of testing. 
  4. Describe and distinguish among the different types and levels of testing (regression, unit, integration, systems, and acceptance). 

[Core-Tier2]

  1. Describe techniques for creating a test plan and generating test cases. 
  2. Create a test plan for a medium-size code segment which includes a test matrix and generation of test data and inputs. 
  3. Implement a test plan for a medium-size code segment. 
  4. Identify the fundamental principles of test-driven development methods and explain the role of automated testing in these methods. 
  5. Discuss issues involving the testing of object-oriented software. 
  6. Describe mocking and dependency injection and their application. 
  7. Undertake, as part of a team activity, a code review of a medium-size code segment. 
  8. Describe the role that tools can play in the validation of software. 
  9. Automate testing in a small software project. 
  10. Explain the roles, pros, and cons of pre-commit and post-commit testing. 
  11. Discuss the tradeoffs between test coverage and test throughput/latency and how this can impact verification. 
  12. Use a defect tracking tool to manage software defects in a small software project. 
  13. Discuss the limitations of testing in certain domains. 

 

[Elective]

  1. Describe and compare different tools for verification and validation. 
  2. Automate the use of different tools in a small software project. 
  3. Explain how and when random numbers should be used in testing. 
  4. Describe approaches for fault estimation. 
  5. Estimate the number of faults in a small software application based on fault density and fault seeding. 
  6. Describe throughput and latency and provide examples of each. 
  7. Explain speedup and the different forms of scaling and how they are computed. 
  8. Describe common performance bottlenecks. 
  9. Describe statistical methods and best practices for benchmarking software. 
  10. Explain techniques for and challenges with measuring time when constructing a benchmark. 
  11. Identify the figures of merit, construct and run a benchmark, and statistically analyze and visualize the results for a small software project. 
  12. Describe techniques and issues with testing asynchronous, concurrent, and parallel software. 
  13. Create a test plan for a medium-size code segment which contains asynchronous, concurrent, and/or parallel code, including a test matrix and generation of test data and inputs. 
  14. Describe techniques for the verification and validation of non-code artifacts. 

 

SE/Refactoring and Code Evolution

[2 Core-Tier2 hours]

Topics: 

[Core-Tier2]

  • Hyrum’s Law / The Law of Implicit Interfaces
  • Backward compatibility
    • Compatibility is not a property of a single entity, it’s a property of a relationship.
    • Backward compatibility needs to be evaluated in terms of provider + consumer(s) or with a well-specified model of what forms of compatibility a provider aspires to / promises.
  • Refactoring
    • Standard refactoring patterns (rename, inline, outline, etc)
    • Use of refactoring tools in IDE
    • Application of static-analysis tools (to identify code in need of refactoring, generate changes, etc)
  • Versioning
    • Semantic Versioning (SemVer)
    • Trunk-based development 
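The backward-compatibility and refactoring topics above can be illustrated with a minimal sketch of a compatible rename, in which the old entry point survives as a deprecated alias of the new one; the names are hypothetical:

```python
# Illustrative sketch of a backward-compatible rename refactoring: the old
# entry point survives as a thin, deprecated alias of the new one, so
# existing callers keep working while new code migrates. Names are
# hypothetical.
import warnings

def parse_config(text):
    """New, preferred name for the operation."""
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

def load_config(text):
    """Deprecated alias retained for backward compatibility."""
    warnings.warn("load_config is deprecated; use parse_config",
                  DeprecationWarning, stacklevel=2)
    return parse_config(text)

# Old and new spellings remain observably identical:
assert load_config("a=1\nb=2") == parse_config("a=1\nb=2") == {"a": "1", "b": "2"}
```

Under SemVer, keeping the alias makes the rename a MINOR change; deleting the old name would be a MAJOR (breaking) one.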

[Elective]

  • “Large Scale” Refactoring – techniques when a refactoring change is too large to commit safely, or when it is impossible to synchronize change between provider + all consumers.
    • Express both old and new APIs so that they can co-exist
    • Minimize the size of behavior changes
    • Why these techniques are required, (e.g. “API consumers I can see” vs “consumers I can’t see”)

 

Illustrative Learning Outcomes:

  1. Identify both explicit and implicit behavior of an interface, and identify potential risks from Hyrum’s Law.
  2. Consider inputs from static analysis tools and/or Software Design principles to identify code in need of refactoring.
  3. Identify changes that can be broadly considered “backward compatible,” potentially with explicit statements about what usage is or is not supported.
  4. Refactor the implementation of an interface to improve design, clarity, etc., with minimal/zero impact on existing users.
  5. Evaluate whether a proposed change is sufficiently safe given the versioning methodology in use for a given project.
  6. [Elective] Plan a complex multi-step refactoring to change default behavior of an API safely.

 

SE/Software Reliability 

[2 Core-Tier2 hours]

Topics:

[Core-Tier2]

  • Concept of reliability as probability of failure or mean time between failures, and faults as cause of failures
  • Identifying reliability requirements for different kinds of software (cross-reference SEP) 
  • Software failures are caused by defects/bugs, so achieving high reliability requires minimizing defects – both by injecting fewer defects (better training, education, planning) and by removing most of the injected defects (testing, code review, etc.)
  • Software reliability, system reliability and failure behavior (cross-reference SF/Reliability Through Redundancy)
  • Defect injection and removal cycle, and different approaches for defect removal
  • Compare the “error budget” approach to reliability with the “error-free” approach, and identify domains where each is relevant
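As a worked illustration of reliability as a probability, the sketch below assumes a constant failure rate (the exponential model, R(t) = e^(−t/MTBF), with assumed numbers) and shows how independent component failures compound in a system that requires every component:

```python
# Worked illustration of the concepts above, assuming a constant failure
# rate (exponential model): reliability over a window t is R(t) = e^(-t/MTBF),
# and independent failures compound in a system requiring every component.
import math

mtbf_hours = 2000.0      # assumed mean time between failures
window_hours = 24.0      # mission window of interest

# Probability one component survives the 24-hour window:
r_component = math.exp(-window_hours / mtbf_hours)

# A system of 5 such components, all required, with independent failures:
r_system = r_component ** 5

print(f"component 24h reliability: {r_component:.4f}")
print(f"5-component system:        {r_system:.4f}")
```

The drop from component to system reliability motivates both defect removal and the redundancy techniques cross-referenced above.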

 

[Elective]

  • Software reliability models
  • Software fault tolerance techniques and models
    • Contextual differences in fault tolerance (e.g. crashing a flight critical system is strongly avoided, crashing a data processing system before corrupt data is written to storage is highly valuable)
  • Software reliability engineering practices – including reviews, testing, practical model checking
  • Identification of dependent and independent failure domains, and their impact on system reliability
  • Measurement-based analysis of software reliability – telemetry, monitoring and alerting, dashboards, release qualification metrics, etc.

 

Learning Outcomes:

[Core-Tier2]

  1. Describe how to determine the level of reliability required by a software system. 
  2. Explain the problems that exist in achieving very high levels of reliability. 
  3. Understand approaches to minimizing faults that can be applied at each stage of the software lifecycle. 

 

[Elective]

  1. Demonstrate the ability to apply multiple methods to develop reliability estimates for a software system. 
  2. Identify methods that will lead to the realization of a software architecture that achieves a specified level of reliability. 
  3. Identify ways to apply redundancy to achieve fault tolerance. 
  4. Identify single-point-of-failure (SPF) dependencies in a system design.

 

SE/Formal Methods 

[Elective]

The topics listed below have a strong dependency on core material from the Discrete Structures area, particularly knowledge units DS/Basic Logic and DS/Proof Techniques.

Topics:

  • Formal specification of interfaces
    • Specification of pre- and post- conditions
    • Formal languages for writing and analyzing pre- and post-conditions.
  • Problem areas well served by formal methods
    • Lock-free programming, data races
    • Asynchronous and distributed systems, deadlock, livelock, etc.
  • Comparison to other tools and techniques for defect detection
    • Testing
    • Fuzzing
  • Formal approaches to software modeling and analysis
    • Model checkers
    • Model finders
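The pre- and post-condition topics above can be made concrete with an executable sketch in which runtime assertions play the role of a declarative formal specification; binary search is a standard classroom example:

```python
# Executable illustration: runtime assertions stand in for the declarative
# pre- and post-conditions a formal specification language would state.
# Binary search is a standard classroom example.

def binary_search(xs, target):
    """Return an index i with xs[i] == target, or -1 if target is absent.
    Precondition: xs is sorted in non-decreasing order."""
    assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)), "precondition: sorted"
    lo, hi = 0, len(xs) - 1
    result = -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            result = mid
            break
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    # Postcondition: -1 means absent; otherwise the index holds the target.
    assert (result == -1 and target not in xs) or xs[result] == target
    return result

print(binary_search([1, 3, 5, 7], 5))  # 2
print(binary_search([1, 3, 5, 7], 4))  # -1
```

A formal tool would verify such conditions for all inputs statically, whereas the assertions here only check them for the inputs actually exercised – a useful contrast with testing.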

 

Illustrative Learning Outcomes:

  1. Describe the role formal specification and analysis techniques can play in the development of complex software and compare their use as validation and verification techniques with testing.
  2. Apply formal specification and analysis techniques to software designs and programs with low complexity. 
  3. Explain the potential benefits and drawbacks of using formal specification languages. 

 

Desirable Professional Dispositions 

The most desirable professional dispositions for this knowledge area are:

  • Collaborative: Software engineering is increasingly described as a “team sport” – successful software engineers are able to work with others effectively. Humility, respect, and trust underpin the collaborative relationships that are essential to success in this field.
  • Professional: Software engineering produces technology that has the chance to influence literally billions of people. Awareness of our role in society, strong ethical behavior, and commitment to respectful day-to-day behavior are essential.
  • Communicative: No single software engineer on a project is likely to know all of the project details. Successful software projects depend on engineers communicating clearly and regularly in order to coordinate effectively.
  • Meticulous: Software engineering requires attention to detail and consistent behavior from everyone on the team. Success in this field is clearly influenced by a meticulous approach – thorough understanding, proper procedures, and a solid avoidance of cutting corners.

 

Necessary and Desirable Math

Necessary: None

Desirable: Introductory statistics (performance comparisons, evaluating experiments, interpreting survey results, etc) 

 

Shared and Crosscutting Concepts 

This knowledge area shares the following concepts with other knowledge areas:

  • Professionalism / teamwork with Society, Ethics, and Professionalism (TODO)
  • Data Modeling with Information Management
  • Secure Programming with Information Security

Crosscutting concepts that apply to this knowledge area include:

  • Ethics
  • Programming

 

Subcommittee

Chair: Titus Winters (Google, New York City, NY, USA)

Subcommittee members:

  • Brett A. Becker (University College Dublin, Ireland)
  • Adam Vartanian (Cord, London, UK)
  • Bryce Adelstein Lelbach (NVIDIA, New York City, NY, USA)
  • Patrick Servello (CIWRO, Norman, OK, USA)
  • Pankaj Jalote (IIIT-Delhi, Delhi, India)
  • Christian Servin (El Paso Community College, TX, USA)

Other contributors: 

  • Hyrum Wright (Google, Pittsburgh, PA, USA)
  • Olivier Giroux (Apple, Cupertino, CA, USA)