Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. All literature included in the review was obtained from publicly available sources.
Organizational readiness assessments have a long history of development as support tools for successful implementation. However, it remains unclear how best to operationalize readiness across varied projects or settings. We conducted a synthesis and content analysis of published readiness instruments to compare how investigators have operationalized the concept of organizational readiness for change.
We identified readiness assessments using a systematic review and update search. We mapped individual assessment items to the Consolidated Framework for Implementation Research (CFIR), which identifies five domains affecting implementation (outer setting, inner setting, intervention characteristics, characteristics of individuals, and implementation process) and multiple constructs within each domain.
Of 1370 survey items, 897 (65%) mapped to the CFIR domain of inner setting, most commonly related to constructs of readiness for implementation (n = 220); networks and communications (n = 207); implementation climate (n = 204); structural characteristics (n = 139); and culture (n = 93). Two hundred forty-two items (18%) mapped to characteristics of individuals (mainly other personal attributes [n = 157] and self-efficacy [n = 52]); 80 (6%) mapped to outer setting; 51 (4%) mapped to implementation process; 40 (3%) mapped to intervention characteristics; and 60 (4%) did not map to CFIR constructs. Instruments were typically tailored to specific interventions or contexts.
Available readiness instruments predominantly focus on contextual factors within the organization and characteristics of individuals, but the specificity of most assessment items suggests a need to tailor items to the specific scenario in which an assessment is fielded. Readiness assessments must bridge the gap between measuring a theoretical construct and factors of importance to a particular implementation.
Keywords: Systematic review, Organizational readiness for change, Content analysis, Implementation research, Consolidated framework for implementation research
The rapid growth of multi-disciplinary fields, including implementation science, brings with it a proliferation of terminology [1, 2]. While some of these terms may represent unique ideas, there are also many examples of the Jingle and Jangle Fallacies [3, 4]. The Jangle Fallacy, also known as synonymy, occurs when multiple names are used to refer to the same concept or thing (e.g., practice facilitation and coaching). Conversely, the Jingle Fallacy, or polysemy, occurs when the same name is used for different concepts or things. For instance, a “practice” in healthcare could refer to a medical organization (e.g., there are three doctors at this practice) or a strategy or process (e.g., a care management practice to manage chronic illness).
The seemingly self-explanatory concept of “organizational readiness for change” actually falls prey to both the Jingle and Jangle Fallacies. In the case of the Jangle Fallacy, we do not yet have good distinctions between assessing “organizational readiness for change,” “needs,” “barriers and facilitators,” or “factors affecting implementation” [5]. An earlier systematic review on organizational readiness for change found that relevant literature, in addition to discussing “readiness”, used terms like “preparedness”, “willingness”, “commitment” and “acceptance” [6].
The Jingle Fallacy also applies, in that “organizational readiness for change” has been defined and measured in different ways. Some definitions and measures focus on the characteristics of individuals within an organization, as demonstrated by this definition from Weiner and colleagues: “the extent to which organizational members are psychologically and behaviorally prepared to implement organizational change” [7]. Others focus on macro-level factors, such as collective commitment or collective efficacy, and define organizational readiness for change as “a comprehensive attitude” that incorporates factors at an organizational level [8].
In the absence of a consensus conceptual framework for organizational readiness for change, knowing what needs to be included in such an assessment may remain a challenge [9]. Theorists in implementation science have an interest in refining and standardizing the measurement of organizational readiness for change to improve conceptual clarity, comparison across sites and studies, and predictive validity. In practice, however, using an existing measure may be challenging. Some assessments are developed with a particular setting or intervention in mind [6], for example, specific to addiction treatment [10] or describing transitions related to a hospital relocation [11], which can make them less generalizable. On the other hand, broader assessments, in their attempts to be inclusive, may be lengthy or imprecise and thus require adaptation to meet the needs of a given context.
Our work began as part of the US Department of Veterans Affairs Health Services Research and Development (HSR&D) Care Coordination Quality Enhancement Research Initiative (QUERI) program. One of our aims was to use readiness assessments across three different projects to improve care coordination in VA and compare their predictive validity regarding implementation outcomes. We began by searching for existing assessments and discovered that a team at St. Michael’s Hospital in Toronto had created the Ready, Set, Change! decision support tool to help researchers identify existing assessments that would be best suited for their studies [12]. The Ready, Set, Change! team included assessments from a 2014 systematic review [6] that met pre-determined criteria for validity and reliability. The recommended assessments from the decision support tool, however, were not suitable for our needs without adaptation, due to their length and lack of relevance to our specific context and intervention details.
In response to this experience, we set out to review existing measures of organizational readiness for change to see how others had operationalized the concept. We then engaged in content analysis to identify core concepts, mapping them to the Consolidated Framework for Implementation Research (CFIR) [13]. CFIR provides a broad range of constructs relevant to implementation research and allowed for comprehensive description and comparison of the explicit and implicit definitions and frameworks underlying identified readiness assessments. Because we anticipated a range of organizational readiness definitions and measurement approaches, we chose CFIR as a broad framework likely to capture the various forms organizational readiness assessments take, even when those assessments did not overlap with each other or with any one organizational readiness for change framework. In building on prior work [6, 7, 12, 14], our objective is pragmatic: to support developers of readiness assessments in identifying key topics to keep in mind when tailoring an existing assessment or developing a new one.
Our approach involved multiple steps. First, we used systematic review methods to update the database searches conducted by a prior review of organizational readiness for change assessments to identify any additional relevant assessments. Then, we built an item bank composed of the individual items included in the identified readiness assessments. Finally, we used directed content analysis to sort items into categories, using CFIR as our initial foundation [13]. This systematic review is reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (see Additional file 1 for the PRISMA checklist) [15].
We built upon the literature search conducted by Gagnon and colleagues as part of their 2014 systematic review of organizational readiness instruments [6]. Because this review of organizational readiness assessments used a search conducted in 2012, we updated the search through June 14, 2017. This broad search was based on terms related to readiness, change, and health or social services within six databases: Web of Science, Sociological Abstracts, PubMed, PsycINFO, Embase, and CINAHL (see Additional file 2 for full search strategy). We found additional studies by mining identified literature for relevant references, as well as by expert suggestion.
Two team members (IML, DMD) independently screened all identified titles and abstracts in duplicate. For potentially relevant abstracts, we retrieved full-text articles and reviewed them independently in duplicate as well, with discrepancies reviewed by the full team. To be included, a publication needed to make available the actual assessment used, with a full list of individual items. This assessment needed to be relevant to healthcare delivery settings and to measure organizational readiness for change. Because, as noted above, organizational readiness for change is a nebulous concept, the measure had to capture a general sentiment of willingness, readiness, or acceptance for an organizational or collective change or innovation (rather than personal behavior change, e.g., for smoking cessation). Multiple studies using the same assessment could be included if they represented unique data collection with separate samples of participants, since each use constituted an operationalization that could inform our research objective. By including duplications and variations, we were better able to describe the uses of each assessment, including the contexts in which each assessment was used, whether the assessment was altered, and whether assessments were collected alongside additional measures.
We transcribed all individual questions or items from included publications into a database that served as an item bank. We captured information about each included publication, including the name of the assessment used (when reported), total number of items in that assessment or assessments, study setting, study sample, type of intervention, and any additional data collected for the study (e.g., other screeners or surveys, interviews, patient records). For items that appeared multiple times, we made separate entries in the database for each unique appearance (i.e., when one assessment was used by multiple studies in part or in whole). We did not conduct a quality assessment of the included studies, since our analysis was not focused on the validity or robustness of study findings.
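To make the structure of the item bank concrete, the sketch below shows one way such a record could be represented. The field names are hypothetical illustrations of the information listed above, not our actual database schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ItemBankEntry:
    """One row per unique appearance of an assessment item.

    Field names are illustrative; the actual database may differ.
    """
    item_text: str                   # verbatim wording of the survey item
    assessment_name: Optional[str]   # name of the source assessment, when reported
    assessment_length: int           # total number of items in the source assessment
    study_setting: str               # setting in which the assessment was fielded
    study_sample: str                # who completed the assessment
    intervention_type: str           # intervention the readiness assessment accompanied
    additional_data: List[str] = field(default_factory=list)  # other data collected (e.g., interviews, patient records)
```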
We used directed content analysis to identify themes within the readiness assessment items in our database. Directed content analysis builds from existing theory, models, or frameworks, which can provide the initial coding structure [16]. Analysts begin with these predetermined codes and code all data to the extent possible. They then identify data that cannot be captured by the existing coding structure and develop new codes, or sub-codes of existing codes, to better capture how the existing theory, model, or framework is supported and extended by the data.
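As a rough sketch of this workflow, a single directed coding pass could be expressed as below. The matcher functions are a stand-in for analyst judgment, which no simple rule can reproduce; the function names are invented for illustration.

```python
def directed_coding_pass(items, codebook):
    """Sort items under predetermined codes; collect residual items.

    `codebook` maps code names to matcher functions that stand in
    for analyst judgment in this simplified sketch.
    """
    coded, uncoded = {}, []
    for item in items:
        matches = [code for code, applies in codebook.items() if applies(item)]
        if matches:
            coded[item] = matches
        else:
            uncoded.append(item)
    # Analysts then review `uncoded` items, add new codes or sub-codes,
    # and repeat until the coding structure captures all analyzable items.
    return coded, uncoded
```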
Because of the conceptual fuzziness surrounding organizational readiness for change, we sought a comprehensive framework to which we could map items in the item bank, and selected CFIR, which includes five domains within which 39 constructs are nested [13]. The “intervention characteristics” domain includes eight constructs, such as relative advantage and cost of the intervention. The “outer setting” domain includes four constructs for factors outside an organization (e.g., external policy and incentives). Within the “inner setting” domain are five constructs: structural characteristics, networks and communications, culture, implementation climate, and readiness for implementation. The last two of these constructs are further broken down into sub-constructs, with six nested under implementation climate and three under readiness for implementation. The fourth domain, “characteristics of individuals,” houses five constructs. The final domain, “process,” comprises four constructs: planning, engaging (which has sub-constructs for four different groups of individuals who may be involved in the implementation), executing, and reflecting and evaluating. For a delineation of how the framework was applied in this analysis, along with exemplar items from the item bank, see the codebook in Additional file 3. We iteratively developed the codebook based on the existing framework to clarify our application of the CFIR construct definitions and any modifications we made. For example, based on the CFIR definitions, we limited certain CFIR constructs to intervention-specific items (e.g., the construct “available resources” was used for project-specific resources), whereas other CFIR constructs were exclusively used for items that described general characteristics (e.g., the construct “structural characteristics” was applied to items describing organizational resources more broadly).
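The nesting described above can be pictured as a simple hierarchy. The abbreviated sketch below lists only the constructs named in this paragraph (the full framework contains 39) and uses placeholder strings where constructs are elided; see Additional file 3 and CFIR [13] for full definitions.

```python
# Abbreviated sketch of the CFIR hierarchy used as our initial coding structure.
CFIR = {
    "intervention characteristics": ["relative advantage", "cost", "(six others)"],
    "outer setting": ["external policy and incentives", "(three others)"],
    "inner setting": {
        "structural characteristics": [],
        "networks and communications": [],
        "culture": [],
        "implementation climate": ["(six sub-constructs)"],
        "readiness for implementation": ["(three sub-constructs)"],
    },
    "characteristics of individuals": ["(five constructs)"],
    "process": ["planning", "engaging", "executing", "reflecting and evaluating"],
}
```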
Two members of the study team independently coded each item with a CFIR construct, or sub-construct where possible. All discrepancies were reconciled by these two members, or by the larger team when necessary. We categorized nearly all items under a CFIR construct or sub-construct. We developed one new construct-level code to capture items related to leadership qualities that were not intervention-specific. These items did not fit the existing CFIR categorizations, as leadership is represented within CFIR only through sub-constructs related to leadership engagement with a specific intervention, rather than general descriptions of an organization’s leaders. Some additional items that were project-specific were excluded from coding (e.g., “12-step theory (AA/NA) is followed by many of the counselors here” [17]). When more than 50 items were coded to a CFIR construct that did not have specific sub-constructs, we used a pile-sort methodology to develop new sub-constructs, which allowed us to better characterize the diversity within these large constructs.
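Schematically, the decision rule for when to develop new sub-constructs could be expressed as below. This is a simplification: the pile sort itself was a manual, analyst-driven exercise, and the function is a hypothetical illustration of the threshold only.

```python
PILE_SORT_THRESHOLD = 50  # constructs with more coded items than this were pile-sorted

def needs_pile_sort(items_coded_to_construct, has_sub_constructs):
    """Return True when a construct's item pile warrants new sub-constructs."""
    return len(items_coded_to_construct) > PILE_SORT_THRESHOLD and not has_sub_constructs
```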
In the case of the networks and communications construct, we used an additional model from Lanham and colleagues to classify the sub-constructs, since emerging sub-codes aligned with characteristics of work relationships that Lanham and colleagues had previously identified [18–20]. CFIR defines the networks and communications construct as being about relationships: “the nature and quality of webs of social networks and the nature and quality of formal and informal communications within an organization” [13]. Specifying sub-constructs using an established model for work relationships therefore had face validity.
The Lanham model was developed with a focus on relationships in healthcare delivery settings; applications of the model suggest that these relationship characteristics should be considered during improvement efforts or redesign [19, 20]. The model includes seven characteristics, of which five emerged within these data and were therefore applied: relatedness, trust, respectful interaction, heedfulness, and mindfulness. Full descriptions of these five characteristics are provided in Additional file 3. We generated additional inductive sub-constructs to capture emergent themes in the items within the networks and communications construct that fell outside the relationship model.
In coding each item, we applied the most granular code appropriate (e.g., using sub-constructs where available) and noted the unit of measurement: “self,” “staff,” “leadership,” or “organization.” “Organization” was the default if the unit of measurement was ambiguous. Additionally, we recorded whether the item referenced implementation of a specific intervention, rather than asking a general question about the state of the organization or individual. See Additional file 4 for the coding form. Once all items were coded, we narratively summarized our findings to describe the operationalization of organizational readiness for change within the included assessments and studies.
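Putting these coding conventions together, a single coded record might look like the following sketch. The item text and field names are invented for illustration; the actual coding form appears in Additional file 4.

```python
# Illustrative coded record; the item text and field names are hypothetical.
coded_item = {
    "item_text": "Staff here communicate openly about problems.",  # invented example item
    "cfir_domain": "inner setting",
    "cfir_construct": "networks and communications",
    "sub_construct": "respectful interaction",  # most granular code available
    "unit_of_measurement": "staff",             # "organization" is the default if ambiguous
    "intervention_specific": False,             # general question, not tied to an intervention
}
```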
Our analysis included 27 publications, representing 29 uses of readiness assessments. From these 29 uses, 1370 individual assessment items were included in the item bank. See Fig. 1 for the literature flow.