TesTrek 2009 Workshops



WEDNESDAY, OCTOBER 28 - MORNING

Goal-Question-Metric: A Path to Meaningful Metrics
Shaun Bradshaw, Questcon Technologies

Track 1: 11:00 - 12:30

QA and test metrics are always popular topics at conferences. Presenters spend time describing the usefulness of a single metric, the need to track a dozen or more standard metrics, or simply the importance of tracking metrics at all. This workshop is a little different. You will not discuss the importance of metrics, which metrics are best to track, or the vital nuances of specific metrics. Rather, Shaun assumes that every organization has its own challenges, goals, and strategies, and therefore needs specific metrics to help overcome those challenges, meet those goals, and guide those strategies. You will be introduced to the Goal-Question-Metric (GQM) method of identifying and developing the key metrics that matter to your individual organization, and you will gain first-hand experience in facilitating a GQM session, making it easier to implement this powerful technique in your own organization.

  • What GQM is and why it is a powerful metrics-development technique
  • How to facilitate a GQM session
  • How to develop and identify the key metrics that are important to your organization (a toy illustration of a GQM breakdown follows this list)
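
GQM itself is tool-agnostic; for concreteness, here is a minimal, hypothetical sketch (in Python, not from the workshop materials) of how one goal might be refined into questions and then into metrics. Every name and metric below is invented for illustration.

  # Hypothetical GQM breakdown: a goal is refined into questions,
  # and each question is answered by one or more concrete metrics.
  from dataclasses import dataclass, field

  @dataclass
  class Question:
      text: str
      metrics: list[str] = field(default_factory=list)

  @dataclass
  class Goal:
      statement: str
      questions: list[Question] = field(default_factory=list)

      def all_metrics(self) -> list[str]:
          """Flatten the tree into the list of metrics worth collecting."""
          return [m for q in self.questions for m in q.metrics]

  # Invented example: reduce defects that escape to production.
  goal = Goal(
      statement="Reduce defects that escape to production",
      questions=[
          Question("Where are defects introduced?",
                   ["defects by phase", "defects by component"]),
          Question("How effective is our testing?",
                   ["defect removal efficiency", "requirement coverage"]),
      ],
  )

  print(goal.all_metrics())
  # ['defects by phase', 'defects by component',
  #  'defect removal efficiency', 'requirement coverage']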

About the Workshop Leader...
As Director of Quality Solutions at Questcon Technologies, Shaun Bradshaw is responsible for managing Questcon's team of Senior Practice Managers in the areas of quality solutions development and service delivery. In this role, he works with a variety of clients to improve their QA and test processes by advising, teaching, and mentoring them on the use of effective testing and test management techniques. He is the co-author and editor of Questcon's QuestAssured® suite of methodologies including the Test Methodology, Test Management Methodology, and Test Metrics Methodology. Shaun is also a popular speaker at many of the major industry conferences.


Introducing Application Lifecycle Management: A Tester's Perspective
Adam Gallant, Microsoft Canada

Track 2: 11:00 - 12:30

Managing complexity, aligning IT with the business, and enabling agility are top priorities for CIOs who are under pressure to do more for the business with fixed or diminishing budgets. To help achieve this, a common lifecycle management solution is needed to help you track, balance, and communicate the systems being created to run your business. Application Lifecycle Management (ALM) provides such a solution by addressing the overall alignment and synchronization of business goals and IT investment priorities. A key characteristic of ALM is that all the project stakeholders, both business and IT, share the same pool of up-to-date information. ALM tools link together all the artifacts that result from the typical activities of an application development project. In this workshop, Adam will use Microsoft Visual Studio Team System to demonstrate how an effective ALM strategy relies on automation, integration, and a coordinated approach to optimize software quality and the development process.

  • Understand the characteristics of an effective ALM strategy.
  • Learn to manage and analyze application traceability (a simple illustration follows this list).
  • See how to achieve accurate progress tracking and communication.
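
To make the traceability idea concrete: artifacts are linked so that questions such as "which requirements have no tests?" can be answered mechanically. The sketch below is a generic, invented Python illustration; it is not the Visual Studio Team System API, and all identifiers are hypothetical.

  # Illustrative only: traceability as a set of typed links between
  # artifacts. None of these names come from Visual Studio Team System.
  requirements = {"REQ-1", "REQ-2", "REQ-3"}

  # Links recorded during the project: test case -> requirement verified.
  test_links = {
      "TC-10": "REQ-1",
      "TC-11": "REQ-1",
      "TC-12": "REQ-3",
  }

  covered = set(test_links.values())
  print("Requirements without tests:", sorted(requirements - covered))
  # Requirements without tests: ['REQ-2']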

About the Workshop Leader...
Adam Gallant is a Microsoft Technology Solutions Professional for Developer Tools who focuses on helping Microsoft's customers and partners understand how to implement and drive value from Visual Studio Team System in their organizations. Adam has been dedicated to the technical and practical implementation of Microsoft .NET technology throughout his time at Microsoft. In addition to his extensive hands-on experience, he has acquired several Microsoft certifications, including MCSE, MCDBA, and MCSD. Adam is an experienced speaker and frequently presents at Microsoft Developer Network (MSDN) events across Canada.


Achieving Predictability in Testing Program Management
Srinivas Polisetty, Infosys Technologies, Ltd

Track 3: 11:00 - 12:30

Predictability of software testing often holds the key to the successful implementation of large, complex, IT-enabled business transformation programs. Because variances accumulate across the stages of the software development life cycle (a "bullwhip" effect), the problem of predictability is most acute at the final stage: testing. If testing programs are not well managed for predictability, the result can be not only delayed business benefits but also other adverse impacts such as low acceptance from the user community, less-than-expected value delivery, and greater total cost of ownership. This workshop takes a structured approach, breaking predictability down into its four dimensions: quality, cost, schedule, and scope. You will discuss each of these dimensions in detail, taking into account the contributing factors that affect each area. Finally, you will explore ways to manage each dimension, including methodologies, techniques, and tools that help organizations implement complex business transformation programs with better predictability.

  • Understand the need for and challenge of predictability
  • Learn how to balance the interdependencies between the dimensions of predictability
  • Discover tools and techniques that a test manager can use in managing large testing programs

About the Workshop Leader...
Srinivas Polisetty is a practice lead at Infosys. With more than 18 years of experience in the IT industry, he specializes in building end-to-end testing solutions for major insurance, healthcare, and life sciences accounts. Srinivas' experience ranges from performing assessments and gap analyses to understand the current state of a testing organization, to building implementation roadmaps that close those gaps and optimize cost, quality, and time to market. He holds a bachelor's degree in electronics and communication engineering and a Master of Business Administration. He is PMP certified and has presented at QAI's 6th International Software Testing Conference in New Delhi, where he won the 'Best of the Best' award.


WEDNESDAY, OCTOBER 28 - AFTERNOON

Software Testing Process Improvement Challenges and Realistic Approaches
Pedram Faghihi & Ayal Bida, Aviva Canada

Track 1: 3:00 - 4:30

A chaotic approach to software testing is unacceptable, considering the depth of damage that faulty testing can inflict on a business. Instead, testing practices must be reconsidered to include early engagement, which requires coordination and having defined, agreed-upon processes in place. In this workshop, the real challenges of defining and implementing processes are explored through an actual case study that reflects the strategies used to sell the desired processes. Industry standards and maturity models may be used as guidelines, but they do not consider the impact of pushback in real-life situations. The culture of an organization and the diversity of interests represented are among the hidden and unpredictable factors affecting the deployment phase. Join Pedram as he leads you through the steps involved in real change by focusing on the human factors and ways of dealing with them.

  • Discuss the real facts about software testing process improvement approaches
  • Learn a framework for providing a practical solution to maximize the success of a process improvement initiative
  • Participate in a hands-on experience scenario

About the Workshop Leaders...
Pedram Faghihi is an internal Process Analyst and Training Specialist Lead with Aviva Canada. His duties involve software process improvement analysis and implementation as well as the development of curriculum and customized corporate training courses and materials. Pedram holds an M.Sc. in Engineering and an Hon. CPD in Computer Programming and Analysis, and has over 10 years of academic experience as a college and university instructor. He has been actively involved in critical projects within a variety of industries, including telecommunications, insurance, and retail. He is the founder of the advanced software testing curriculum at Centennial College.

In his current role at Aviva Canada, Ayal Bida is Manager for Quality Assurance of the newly implemented Testing Centre of Excellence. This group's mandate is to ensure adherence to testing methodology and to assist business units and projects alike in successfully implementing code with minimal impact to the user community. In past roles, Ayal worked to implement an IT development life cycle process and later provided compliance oversight to ensure adherence to the new process framework. Ayal has also taught Computer Studies at Seneca College, working for 5 years as an Instructor in the Continuing Education department.


Two Futures of Software Testing
Michael Bolton, DevelopSense

Track 2: 3:00 - 4:30

Niels Bohr, Woody Allen, or Yogi Berra (perhaps all three) once said, "Prediction is very difficult, especially about the future." Michael rises to the challenge and dares to present TWO futures of software testing. In one vision, testers are the gatekeepers, responsible for assuring product quality. Testing follows a rigorously controlled process. Changes to the product are resisted so that risk can be eliminated. This is the dark vision of the future. In the other view, the bright future, testers are active investigators, critical thinkers, and highly skilled, valued members of the project team. Testers neither accept nor desire responsibility for releasing the product; instead, they provide important, timely, credible information to managers so that sound and informed business decisions can be made. Most importantly, testers embrace challenge and change. Where are we now, and where are we going? In this interactive presentation, Michael shares his visions of the future. The presentation includes a brief exercise and dialogue, encouraging lively discussion and debate from the floor.

About the Workshop Leader...
Michael Bolton has been teaching software testing for the last eight years across five continents. He is co-author, along with senior author James Bach, of Rapid Software Testing, a course that presents a methodology and mindset for expert software testing in uncertain conditions and under extreme time pressure. Michael is the Program Chair for TASSQ, the Toronto Association of System and Software Quality, and a co-founder of the Toronto Workshops on Software Testing. He is a regular columnist for Better Software Magazine and also writes for Quality Software, a magazine published by TASSQ. Michael lives in Toronto, Canada, with his wife and two children.


Modeling Scenarios Using Data
Fiona Charles, Quality Intelligence

Track 3: 3:00 - 4:30

Many test efforts depend on scenarios that represent real sequences of transactions and events. Scenarios are important tools for finding problems, and often they are essential for business acceptance because they encapsulate test ideas in a format that is meaningful to business users. User stories, use cases, and business requirements can be good sources of scenario test ideas. Testers know, though, that these are rarely comprehensive or detailed enough to encompass a thorough test. And if we base our test model entirely on the same sources used by developers, our test will reflect the assumptions made in building the system, with the risk that errors arising from misinterpreted or incomplete requirements will be missed. One way to mitigate this is to build a test model whose foundation is a conceptual framework based on the data flows. Scenarios can then be created through structured analysis of the data. This method helps to ensure adequate coverage and testing rigor, provides a cross-check for other test ideas, and, because it employs a structure, facilitates building scenarios up from reusable components. In this interactive workshop:

  • Learn to develop and use a framework based on data to model a scenario test
  • Learn how to develop building blocks for flexible assembly of structured scenarios (a toy illustration follows this list)
  • Understand the benefits, limitations, and disadvantages of the method based on an actual project
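
As a rough picture of the building-block idea, the sketch below assembles two scenarios from the same reusable, data-parameterized steps. It is a hypothetical Python illustration; the steps, data, and domain are invented and not taken from Fiona's materials.

  # Hypothetical sketch: reusable scenario steps driven by data rows.
  from typing import Callable

  Step = Callable[[dict], None]

  def create_order(data: dict) -> None:
      print(f"create order for {data['customer']} ({data['items']} items)")

  def ship_order(data: dict) -> None:
      print(f"ship order via {data['carrier']}")

  def refund_order(data: dict) -> None:
      print(f"refund {data['amount']}")

  def run_scenario(name: str, steps: list[Step], data: dict) -> None:
      print(f"-- scenario: {name}")
      for step in steps:
          step(data)

  # Two scenarios assembled from the same building blocks, each driven
  # by a different data row from the structured analysis.
  run_scenario("happy path", [create_order, ship_order],
               {"customer": "Acme", "items": 3, "carrier": "ground"})
  run_scenario("return flow", [create_order, ship_order, refund_order],
               {"customer": "Acme", "items": 1, "carrier": "air",
                "amount": "$40.00"})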

About the Workshop Leader...
Fiona Charles is a Toronto-based test project manager and consultant with 30 years of experience in software development and integration. Through her company, Quality Intelligence, she works with clients in diverse industries to design and implement pragmatic test and test management practices that match unique business challenges. Fiona is on the TASSQ board. She is co-founder and organizer of the Toronto Workshop for Software Testing, an annual invitational conference for senior test practitioners. Fiona edited The Gift of Time (2008, Dorset House), is a frequent contributor to StickyMinds.com and Better Software magazine, and presents regularly on testing and test management topics.


THURSDAY, OCTOBER 29 - MORNING

Debunking the Myths of Test Automation
Kevin Burr, Consultant

Track 1: 11:00 - 12:30

It is one of the sad lessons that management and testers have learned: it takes far more time to automate a test than to run it by hand. Many groups have discovered that test automation is untrustworthy, a time and money pit, or, at best, usable only in very limited circumstances. The saddest thing of all is that this belief is entirely untrue! Kevin has seen teams double the number of test cases run per week through automation. He has witnessed the extreme of a single tester creating and running hundreds of test cases on multiple test beds simultaneously in a performance testing scenario. This success is not tied to specific tools, but rather to a number of simple techniques that not only allow faster creation of test cases but also make it easier to reduce the amount of testing required.

  • Discover automation strategies, including data-driven testing (sketched after this list) and generation of tests from models.
  • Understand how to use risk metrics to drive effectiveness.
  • Explore realistic process improvements to make sure you succeed.
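
For readers unfamiliar with the first strategy, here is a minimal data-driven testing sketch using Python's standard unittest module: one scripted check runs over a table of inputs, so adding coverage means adding rows, not code. The function under test and its business rule are invented for illustration.

  import unittest

  def discount(total: float) -> float:
      """Invented function under test: 10% off orders of $100 or more."""
      return total * 0.9 if total >= 100 else total

  # The data table drives the test: each row is (input, expected).
  CASES = [
      (50.00, 50.00),    # below threshold: no discount
      (100.00, 90.00),   # at threshold: discount applies
      (200.00, 180.00),  # above threshold
  ]

  class DiscountTest(unittest.TestCase):
      def test_discount_table(self):
          for total, expected in CASES:
              with self.subTest(total=total):
                  self.assertAlmostEqual(discount(total), expected)

  if __name__ == "__main__":
      unittest.main()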

About the Workshop Leader...
Kevin Burr has twenty years of software development experience and ten years of management experience in software quality, security risk analysis, test automation strategies, unit and usability testing, performance, and interoperability. Kevin has managed performance, integration, and system testing groups at Nortel, and created the Test Acceleration Consulting Group within Nortel's Software Engineering Analysis Lab. Currently, Kevin is in the Technology Innovation Management Program at Carleton University and is certified as an Information Systems Security Professional.


Exploratory Testing in Agile with Visual Studio Team System 2010
Aaron Kowall, Imaginet Resource Corporation

Track 2: 11:00 - 12:30

Exploratory testing is defined by many as "simultaneous learning, test design, and test execution." It is well suited to the principles of the Agile Manifesto, yet within the agile literature the test techniques most often advocated relate to test automation. In this workshop, Aaron will explore the fact that exploratory testing is not "ad hoc" testing, but rather a highly structured methodology with a specific set of objectives. It is a structure that provides results. Aaron will demonstrate how the Microsoft testing toolset supports the exploratory testing objective within an agile software development process, making this technique a highly effective and very efficient complement to test automation.

  • Learn how exploratory testing relates to the principles of the Agile Manifesto
  • Learn the best practices and sound principles of exploratory testing (one common structure, the session charter, is sketched after this list)
  • See how the latest tooling in Microsoft Visual Studio Team System 2010 supports this process
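
The structure referred to above is often expressed as short, time-boxed sessions run against written charters. The record below is a generic, invented Python sketch of one such session; it is not a Visual Studio Team System 2010 feature, and all fields and values are hypothetical.

  # Hypothetical session record for structured exploratory testing:
  # a charter, a time box, and the notes and bugs gathered while exploring.
  from dataclasses import dataclass, field

  @dataclass
  class ExploratorySession:
      charter: str            # the mission for this session
      timebox_minutes: int    # keep sessions short and focused
      notes: list[str] = field(default_factory=list)
      bugs: list[str] = field(default_factory=list)

  session = ExploratorySession(
      charter="Explore checkout with expired and foreign credit cards",
      timebox_minutes=60,
  )
  session.notes.append("Expired card gives a generic, unclear error message.")
  session.bugs.append("Foreign card accepted without currency conversion.")

  print(f"{session.charter}: {len(session.bugs)} bug(s) logged "
        f"in {session.timebox_minutes} minutes")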

About the Workshop Leader...
Aaron Kowall is the ALM practice lead at Imaginet Resource Corporation, a custom software development and ALM consulting firm founded in Winnipeg, Manitoba. His current professional focus is mentoring organizations to improve their application life cycle management processes. Aaron has over 16 years of experience in the application development and IT arena. His drive and determination have made him a sought-after leader in the field.


Kick Start Local Workstation Testing!
Stelios Pantazopoulos & Eric Liu, ThoughtWorks, Inc.

Track 3: 11:00 - 12:30

A common belief in most IT organizations is that a system must be validated in an integrated pre-production test environment. The controlled, shared, and complex nature of such environments makes shortening acceptance test cycles an extremely difficult proposition. A solution to this problem is for the majority of testing to be done instead in a rich, fully functional, self-contained build of the system deployed to tester workstations. Learn how this approach can change your test strategy. Follow along on a sample code base to understand what architectural decisions are required to support this process. Understand how to orchestrate and execute testing using this technique, and review a cost/benefit analysis from a real-world project. Stelios and Eric will carry out a live demonstration of the workstation testing techniques on a system built especially for this session. You will be given a USB memory key containing a complete copy of this system, as well as other artifacts from the workshop. Attendees are encouraged to bring their laptops.

  • Discover how to architect a system to enable a fully functional, self-contained build that can run independently on a workstation (a minimal sketch follows this list).
  • Learn how several types of testing can move to the workstation build.
  • Identify the types of testing that should remain in the pre-production test environment.
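
One architectural decision of this kind is to make external integrations swappable, so the workstation build can run against in-process fakes while the integrated environment uses the real thing. The Python sketch below is a hypothetical illustration, not code from the session's sample system.

  # Hypothetical sketch: the same application code runs against a real
  # integration in pre-production or an in-process fake on a workstation.
  class RealGateway:
      def charge(self, amount: float) -> str:
          raise RuntimeError("only works in the integrated environment")

  class FakeGateway:
      """In-process stand-in used by the self-contained build."""
      def charge(self, amount: float) -> str:
          return f"FAKE-APPROVAL-{amount:.2f}"

  def checkout(gateway, amount: float) -> str:
      # Application code depends only on the gateway interface,
      # so either implementation can be wired in at startup.
      return gateway.charge(amount)

  # The workstation build wires in the fake.
  print(checkout(FakeGateway(), 19.99))  # FAKE-APPROVAL-19.99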

About the Workshop Leaders...
Stelios Pantazopoulos is a lead consultant with ThoughtWorks, Inc. He has 12 years of IT professional services experience on a variety of projects, in roles including project manager, iteration manager, quality management lead, and developer. Stelios authored a chapter in The ThoughtWorks Anthology entitled "Project Vital Signs," focusing on ways to give teams greater visibility into project health. He presented an experience report at the Agile 2008 conference on the subject of automated functional testing.

Eric Liu is a lead consultant and senior developer at ThoughtWorks, Inc. He enjoys working on enterprise applications and searching for better methods of delivering software as a team. He has over seven years of industry experience on agile projects on both the .NET and Java platforms. Eric holds an M.Sc. in software engineering from the University of Calgary.


THURSDAY, OCTOBER 29 - AFTERNOON

Distributed Testing: An Agile Approach for Offshore Outsourcing
Priya Kalyanasundaram, Cognizant

Track 1: 3:00 - 4:30

One of the key principles behind XP testing and software development is effective communication and transparency between clients, QA testers, software programmers, SMEs, and end users. Agile methods repeatedly emphasize the need for real-time communication, preferably face-to-face, and this physical proximity is one of several reasons why agile techniques are so effective. But given the pressures of cost savings and faster time to market, the same co-located approach cannot always be taken for large or lengthy projects that are expected to deliver business value. One viable solution to this situation is the implementation of "Distributed eXtreme Programming Testing." This workshop will explore the need for distributed XP testing. Priya will provide a primer on the key challenges of implementation and illustrate approaches to addressing them. She will discuss her experiences with distributed XP testing and show how you can stay true to the values and principles of XP in a distributed environment.

  • Discover how Distributed eXtreme Programming Testing can benefit your projects.
  • Learn to address the issues of implementing this approach.
  • Confront the problems of maintaining an agile approach in a distributed environment.

About the Workshop Leader...
Priya Kalyanasundaram is a highly seasoned, solutions-driven professional with over 15 years of IT experience and a successful track record of delivering QA and test management solutions for Fortune 100 clients. She is experienced in a variety of industries, including insurance, financial services, and retail. Priya has been associated with Cognizant since 1995. She is a thought leader in the field of software testing and a regular presenter at conferences including QAI, SQS, Swiss Testing Day, and Laboratory of Quality Software. Priya holds a bachelor's degree in electronics and communication engineering.


Cost-Based Performance Modeling: Addressing Performance Uncertainties
Eugene Margulis, Telus Health

Track 3: 3:00 - 4:30

Traditional performance evaluation methods, such as a "big system test" or instrumentation, are costly and do not deal well with the inherent performance-related "uncertainties" of modern systems: requirements that are unclear or change from deployment to deployment, third-party code to which one has no direct access, and variable hardware platforms. These uncertainties make exhaustive testing impractical, and "worst case" testing results in engineering for a "worst impossible case" rather than a realistic customer scenario. Creating a single model based on traceable, repeatable test results for individual system components or transactions saves a huge amount of performance-related engineering and QA effort and cost, and provides almost instantaneous "what if" analysis for product planning. In this interactive workshop you will step through the process of creating cost-based performance models and gain an understanding of how this performance analysis method can work for you.

  • Learn how this method effectively addresses the key uncertainties of performance evaluation
  • Gain the ability to obtain performance and capacity estimates for key product functionality throughout the entire development cycle, often before the first line of code is written
  • Facilitate iterative feedback on performance, enabling continuous improvement
  • Enable quick and inexpensive development of end-user capacity planning and performance analysis tools (a toy model follows this list)
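
In spirit, a cost-based model combines per-transaction costs, each measured once in isolation, into a whole-system estimate that can be re-run instantly for any "what if" mix. The Python sketch below is a hypothetical illustration with invented transaction names and numbers, not Eugene's actual model.

  # Hypothetical cost-based model: CPU milliseconds per transaction,
  # measured per component in isolation, combined into a mix estimate.
  COST_MS = {
      "login": 12.0,
      "search": 35.0,
      "update": 80.0,
  }

  def cpu_utilization(rates_per_sec: dict, cores: int = 4) -> float:
      """Estimate CPU utilization for a given transaction mix."""
      busy_ms = sum(COST_MS[tx] * rate for tx, rate in rates_per_sec.items())
      return busy_ms / (cores * 1000.0)  # fraction of total CPU capacity

  # "What if" analysis: double the search rate without a big system test.
  print(f"{cpu_utilization({'login': 5, 'search': 20, 'update': 2}):.1%}")
  print(f"{cpu_utilization({'login': 5, 'search': 40, 'update': 2}):.1%}")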

About the Workshop Leader...
Over the last 15 years, Eugene Margulis has worked on capacity and performance analysis and evaluation at Nortel. He has been involved in the design, architecture, and QA of telecommunication systems ranging from hard real-time call processing to network management. At present, he is responsible for all performance- and capacity-related architecture, design, and verification aspects of Nortel's network management platform.





