The practical differences between data governance and data management
Lawrence Giordano and Patrick Onions, September 2021
This paper explores whether data governance is distinct from data management in practice, or whether organisations treat it as a catch-all for operational data problem-solving. Literature suggests this conflation occurs and may be widespread, and one framework even recommends that practitioners define data governance for their own organisations.
The authors’ experience in eight roles confirms that data governance and data management are treated as interchangeable. This experience is assessed by comparing practice against the DMBOK framework, modelling the approaches taken by organisations, and examining the roles offered by advertised vacancies over a six month period.
Five distinct approaches to data governance were identified. Organisations may wish to establish data governance, remedy data quality issues, achieve regulatory compliance, support future tooling, or support an insight function. In each scenario data governance was found to be largely reactive, focused on operational data problems, and performing hands-on data management. Five fundamental and interrelated causes for this were identified. Data governance is not well understood, literature and practice differ markedly, and data governance and data management are treated synonymously. Data governance is seldom correctly established, and its goals and expectations are often misaligned.
Outcomes may be improved at any stage in the data governance function’s lifecycle, although these work best when baked into the function from inception. We advocate a better understanding of the distinction between data governance and data management. This should be followed by clearly defining the goals, and making an informed choice between governance and management (or both). For example, data governance is more effective as a strategic function, whilst data management is a more suitable option where the expectation is to deliver localised data improvements. Any governance should be positioned appropriately in the organisation, granted the requisite authority and influence, and supported with sufficient data management capability and capacity to achieve the desired goals.
Thanks to Simon Truran and Andrew Sharp for their valuable feedback!
What is data governance?
A brief search of academic literature reveals dozens of definitions, models and frameworks for data governance: a dizzying A-to-Z array, many with real overlap, others with semantic and terminological differences, and some with genuine novelty. This has been observed by other researchers. A decade ago data governance definitions were seen as emerging or absent (Begg and Caira, 2012; Otto, 2011), and that situation persists (Benfeldt et al., 2020). There is a “lack of established terms” and an “ever-changing theme” (Micheli et al., 2020). This evolution is understandable and even necessary given the complications of new technology, such as the gender bias that Google Translate applied to Finnish gender-neutral statements (Ullmann et al., 2020).
Vibrant academic debate stimulates a field, encourages growth, tests ideas, and produces a toolbox of solutions. However, practitioners do benefit from a solid theoretical foundation. Nebulous concepts are difficult to initiate and implement, leadership cannot afford philosophical navel-gazing, project teams cannot afford vagueness, and there is too much opportunity for misunderstanding. Ultimately this can lead to functions that are misaligned with stakeholder expectations, inappropriately funded, incorrectly staffed, ineffective, doomed to fail if there is no appetite or opportunity for realignment, and even performing duties that are in no way governance-related.
It is beyond the scope of this paper to review and compare such a large body of work, and perhaps unnecessary. The Data Management Association (DAMA) is an international body with a large membership that has produced a ‘bible’ called the Data Management Body of Knowledge (DMBOK, 2017), arguably the most authoritative source for definitions. Written by expert authors and extensively reviewed by academics and practitioners, the DMBOK is based on and incorporates a substantial volume of theory, is widely used and cited, and familiarity with its contents is frequently a certification and job requirement; the document is also extensive and conveniently accessible. This paper therefore relies on the definitions and activities set out in the DMBOK.
Data governance is “the exercise of authority and control (planning, monitoring and enforcement) over the management of data assets” (DMBOK, p.67). The primary governance activities are listed in Table 1 below, and these apply to people, processes and technology.
As defined, data governance applies to the ‘management of data assets’, which is “the development, execution, and supervision of plans, policies, programs, and practices that deliver, control, protect, and enhance the value of data and information assets throughout their lifecycle” (DMBOK, p.17). Data management covers ten knowledge areas, represented as spokes in the DAMA Wheel (p.35) with governance at the hub: Data architecture, Data integration and interoperability, Data modelling and design, Data quality, Data security, Data storage and operations, Data warehousing and business intelligence, Document and content management, Metadata, and Reference and master data.
Comparison of these two definitions suggests a nuanced relationship between governance and management. One interpretation is that data governance provides direction and oversight, whilst data management ensures execution. Another interpretation, which corresponds with the opinions of authors such as Plotkin (2020, p.2), is that data governance is about organising people to ensure data is fit for purpose — rather than managing the data itself. As defined there is no specific mention of a hierarchical relationship, and the boundaries between their respective activities can be blurred; a subtle distinction that arguably leads to confusion in practice.
Data governance in practice
The authors are data governance practitioners with experience in several large enterprises in the United Kingdom. We agree with Plotkin (2020), who says “Ask a room full of data governance practitioners what data governance means, and you’ll probably get as many definitions as there are people.” We also agree with some literature that observes the disconnect between theory and practice, lack of definition, and a lack of common understanding. In particular we concur with Goetz (2015), who asks: “Since when did data management and data governance become interchangeable?”
Three methods were chosen to reflect on and understand our opinions of this phenomenon. We mapped our experience against the list of DMBOK activities, compared duties in a set of recruitment adverts, and derived a model of data governance approaches from our work.
Mapping experience to DMBOK activities
The authors cumulatively worked in eight data governance positions in seven large organisations employing over 10,000 people in the United Kingdom. A loose comparison against the seventeen governance activities described by the DMBOK (pp.79–91) reveals that the authors were engaged in only a moderate proportion of them.
The authors’ experience at their respective employers is provided in Table 1 below. A score of 1 was given for each employer that performed the corresponding DMBOK activity, regardless of the extent to which that was performed by the author, the similarity of the activity to DMBOK definitions, or the degree of success that was achieved. Activities not on the DMBOK list were ignored for this analysis. Totals are the sum of scores, out of a maximum equal to the number of organisations multiplied by the number of activities (4 employers × 17 activities for each author). Correlation is Pearson’s correlation coefficient (Excel’s CORREL function) between the two authors’ score sets, which shows a weak relationship between them.
Results of this analysis will be considered in the Discussion section.
Table 1. Author experience mapped against DMBOK data governance activities
Mapping experience to approach
Each of the authors’ employers have practiced data governance differently, and to understand this we sought to reflect on the ‘approach’ taken in each case. Approach is the label used here to identify the strategy and general characteristics of the data governance function; such as how data governance was understood by the organisation, the underlying philosophy and rationale, purpose and expectations, activities, and how data governance was intended to deliver value.
Five approaches were inductively derived from our experience, inferred from the purpose and nature of our daily activities rather than from the wording of policies, vision statements or strategies. Each organisation was scored for every approach that substantially resembled its data governance function, and some organisations displayed multiple approaches. Approaches and scores are provided in Table 2 below. Only cases where the approach was the dominant activity were scored, not governance activity that was either nominal or abandoned by the teams. The five-approach model and scores do not reflect factors like different stages of maturity, prior history of the function, scope of the function (departmental, enterprise, or inter-enterprise), and the authors’ roles in establishing or guiding the approach taken.
Table 2. Number of organisations characterised by approach
Establishing data governance
This approach describes situations where our employers were in the process of adopting data governance.
Establishing data governance most often consisted of defining data governance, setting out policies, strategy and an operating framework, and initiating basic activities. In some cases the authors were responsible for setting the direction of data governance for their employers, and in others performed duties already established. In the former cases, this may have been the organisation’s first foray into data governance, an attempt to reignite previously failed attempts, or a refresh and realignment of an existing function.
Reflection suggests these organisations were typically not prepared for the introduction of data governance, and it tended to be regarded as a silver bullet when decision-makers were unfamiliar with the difference between data governance and data management. Expectations were often high, particularly of the speed and impact of change, but budgets did not extend further than the data governance manager’s salary. The organisation structure did not afford data governance any authority or influence, and data governance positions were not senior enough to drive change, or engage directly with senior leadership. There was seldom any provision for data management, and organisation culture meant disparate teams tended not to be receptive to monitoring and control, even regarding data governance as an impediment. Consequently any data management and improvement was hard-fought and had to be achieved through negotiation, persuasion, and personality.
Remedy for data quality
Data governance may be used as the solution for data quality problems.
Data quality issues affect large organisations, and these can attract senior management attention. The organisation may be tempted to respond to incidents such as major outages, expensive mistakes, or burgeoning complaints by establishing a data governance function with the purpose of improving data quality, or tasking the existing data governance function with finding a data quality solution.
Our experience includes data governance that was tasked with improving data quality; including checking and amending accuracy and consistency of estate data, and correcting and cleaning product data in multiple enterprise systems. This involved assuming stewardship and even ownership of the relevant data sets, and ‘hands-on’ responsibility for delivering tactical data quality improvement. Both of the authors have followed the five-step process of discover, profile, validate, cleanse, and monitor; utilising SQL queries, in-house tools and off-the-shelf products to support and even deliver data quality endeavours.
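The shape of that five-step process can be sketched as follows. The table, column names and business rules are hypothetical, invented only to show how such SQL-driven quality work typically proceeds; real engagements run against enterprise systems, not an in-memory database.

```python
import sqlite3

# Hypothetical product table; names and rules are invented for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (sku TEXT, name TEXT, price REAL)")
con.executemany("INSERT INTO product VALUES (?, ?, ?)", [
    ("A-001", "Widget", 9.99),
    ("A-001", "Widget", 9.99),   # duplicate record
    ("B-002", "", -5.00),        # blank name, negative price
    ("C-003", "Gadget", 19.50),
])

# 1. Discover / 2. Profile: row counts, distinct keys, blank rates.
rows, skus = con.execute(
    "SELECT COUNT(*), COUNT(DISTINCT sku) FROM product").fetchone()

# 3. Validate: flag records that break simple business rules.
invalid = con.execute(
    "SELECT COUNT(*) FROM product WHERE name = '' OR price < 0").fetchone()[0]

# 4. Cleanse: remove duplicates, then remove the rule-breaking rows.
con.execute("""DELETE FROM product WHERE rowid NOT IN
               (SELECT MIN(rowid) FROM product GROUP BY sku)""")
con.execute("DELETE FROM product WHERE name = '' OR price < 0")

# 5. Monitor: re-profile after cleansing and report the result.
clean_rows = con.execute("SELECT COUNT(*) FROM product").fetchone()[0]
print(f"{rows} rows, {skus} distinct SKUs, {invalid} invalid, "
      f"{clean_rows} after cleanse")
```

In practice the monitor step is scheduled and repeated, so that quality metrics are tracked over time rather than measured once.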
Reflection indicates numerous factors may have contributed to data governance taking this approach. These included complex data landscapes with poorly documented data and technology, low IT maturity, moderately to highly powerful organisational silos, shadow IT within departments, budget constraints, scant provision for data management, little provision for operational-phase data costs in project and product roadmaps, and organisation cultures that favoured short-term “just get it done” solutions. Data governance was obliged to manage and even deliver data quality as it was seen as convenient, visible, and staffed with skilled and responsibly-minded people who understood data and data quality. Data governance lacked the organisational position, authority, or sufficiently robust and “saleable” business case to push back or motivate the more conventional or strategic role for governance. Even in those cases where a strong financial benefit for separate data management could be articulated, solutions were frequently thwarted by disagreement over which department would fund the activity and which departments would claim the benefits.
Remedy for regulatory compliance
Data governance may be focused on achieving compliance with data regulations.
Most industries are having to adapt to an ever-increasing range of regulatory requirements. Regulations have been imposed on financial services data for decades, and this has evolved with additional layers and complexity, such as KYC (“know your customer”) and anti-money-laundering. New legislation has also been introduced that has affected niches, such as product safety; or has had a ubiquitous impact, such as data protection. Achieving compliance may be trivial or may have a profound effect on every business area, every environmental factor, and every knowledge area on the “DAMA Wheel” (DMBOK, p.36).
Our experience is that data governance may be given responsibility for achieving regulatory compliance. This experience is garnered from two sectors, three if Data Protection is considered to be a specialised form of data governance.
Data governance at one retailer was tasked with removing non-compliant personal data stored in multiple systems. Activities included interpreting legislation, identifying relevant data, identifying data owners and persuading their involvement, selecting appropriate technology and people, specifying purge criteria, ensuring accuracy of query designs, planning the purge, seeking authorisation and raising change requests, tasking individuals, monitoring progress, and reporting outcomes.
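Specifying purge criteria and verifying query designs is the technical heart of that work. The sketch below illustrates the typical shape of such a purge: a hypothetical customer table, an invented six-year retention rule, and a dry-run count reviewed before any destructive change. None of the names or rules reflect actual legislation or any employer's systems.

```python
import sqlite3
from datetime import date, timedelta

# Hypothetical 6-year retention rule, invented for illustration only.
RETENTION_YEARS = 6
cutoff = (date.today() - timedelta(days=365 * RETENTION_YEARS)).isoformat()

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE customer (
    id INTEGER, email TEXT, last_activity TEXT, legal_hold INTEGER)""")
con.executemany("INSERT INTO customer VALUES (?, ?, ?, ?)", [
    (1, "a@example.com", "2010-03-01", 0),              # past retention: purge
    (2, "b@example.com", "2010-03-01", 1),              # past retention, on hold
    (3, "c@example.com", date.today().isoformat(), 0),  # recent: keep
])

# Dry run first: count candidates before any destructive change, so the
# criteria can be reviewed, authorised, and raised via change control.
candidates = con.execute(
    "SELECT COUNT(*) FROM customer WHERE last_activity < ? AND legal_hold = 0",
    (cutoff,)).fetchone()[0]

# The purge itself, using exactly the same criteria as the dry run.
con.execute(
    "DELETE FROM customer WHERE last_activity < ? AND legal_hold = 0",
    (cutoff,))
remaining = con.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
print(f"purged {candidates}, {remaining} remaining")
```

Keeping the dry-run query and the delete statement identical, as above, is what allows the "ensuring accuracy of query designs" step to be meaningfully signed off before the purge is executed.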
Financial sector data governance experience reveals that data governance and regulatory compliance were distinct, even to the extent of having different functions to support regulatory compliance. Activities focused on mapping data across the technology estate using BPMN, including data design and usage across the organisation. All critical data points and data flows were identified and modelled down to the physical layer using industry tools such as Ardoq and Ab Initio. Much of this was manual, even though emerging process-mining technology, offering varying degrees of automation, could have supported or even replaced the manual BPMN mapping carried out by the function.
Compliance, particularly within the financial sector, introduces complex requirements in understanding where data resides and how it is utilised. These skills were scarce, and in some cases the existing business units were unwilling to take on the burden and responsibility for extra compliance. Enterprise data was managed at the application level, and there were no resources for managing data across system and business unit boundaries. Compliance teams were not technical, individuals in the organisation were risk averse, and few were comfortable or knowledgeable enough to take responsibility for making data changes. Consequently data governance was perceived as competent and available. Our understanding is that data governance in these cases helped segregate responsibility, although this required more coordination of efforts to correctly manage the overlap.
Support for future tooling
Data governance may support large technology projects, often by providing data management.
Large enterprises change their applications, underlying technologies and platforms. Examples include migration to cloud-based solutions, migration to ERP, mainframe decommissioning, ERP decommissioning, and fundamental technology like adopting new databases. Newer technologies are also driving added complexity and paradigm changes. The shift from application-specific development to a service- or microservice-oriented environment can improve scalability, deployment and fault isolation (Nemer, 2019).
Data is a key driver, factor, and obstruction in all of these scenarios. The data landscape grows in complexity, for example through parallel systems and the dispersal of data assets across a complex web of integrations and connectors. This imposes new requirements on how data assets are governed, the skills required to define principles and policies for logic such as APIs and complex data integrations, and on the ‘business’ ownership model.
In our experience, data governance should play a strong supporting role in planning, design, and implementation of future tooling. However, we also note that governance has been seconded into delivering data management on these projects. One large retailer required mainframe data to be modelled to support application migration to the cloud. Initially providing guidance on method and controls, data governance eventually took on responsibility for modelling key data-sets and applications down to the physical level. Another large retailer approached a cloud migration project as ‘analytics first’, moving the enterprise data warehouse from legacy Teradata into GCP. They operated on a service-based architecture, which allowed data governance to embed itself into new service design. Although meeting strong early resistance due to perceptions of slowing down development and contributing to ‘red tape’, this resulted in a more efficiently governed data pipeline.
Strong staff experience and skills within data governance, combined with low general IT maturity, can lead to data governance joining the delivery team for migration projects. On-premise staff may have no experience with migration, and contract teams will lack familiarity with the enterprise architecture, data flows, integration, legacy technology, and even stakeholders. Systems may be poorly documented and data quality sub-optimal, and knowing the technology team becomes vital. Inexperience or lack of situational knowledge inevitably leads to underestimation of complexity, and invariably data management is an after-thought. In these situations it is tempting for organisations to call on data governance to avoid embarrassment or accountability for budget overruns.
Support for insight
Data governance may be used to provide data management support to the analytics function.
Data science and large-scale analytics are driving a demand for clean, structured and accessible data. A data scientist needs time to interrogate and build intelligence to extract value from a data science model, which often requires input from the business and evaluation of available datasets. Rapid adoption and recognition of data science has had knock-on effects, leaving the IT department scrambling for solutions.
Data governance promises to be the silver bullet to the fundamental problems of weak data architecture and poor data quality. This may impose additional requirements on data governance, or may alter its focus entirely. Traditional models based on control and cleansing of master data have expanded to include transactional and reference data. Collaborative decentralised governance, including crowd-sourcing of both definitions and rules for critical data, may be crucial underpinnings for data science. The remit of data governance may also expand to include ethics for AI/ML models, ethics (and associated legality) of processing data, and the fuzzy ethics of data usage and data science.
Our data governance experience includes providing data management services in support of insights. One case involved modelling the data flows from a centralised ERP system into departmental IT databases and spreadsheets, and subsequent use of that data in departmental analytics. Another case at a retailer involved senior leadership making it the focus of data governance to enhance and support the needs of the Data Science function.
Several factors may have influenced the direction that data governance took in these cases. The data governance function was physically located and organised alongside analytics. Enterprise data knowledge was traditionally held at the application level, whilst data governance possessed an enterprise understanding, and data governance was separated from silo- and application-level politics that hindered gathering of enterprise-wide data. Analytics may have been the pressing need, and had the available budget. The data governance teams also possessed skills and knowledge of enterprise data, available to deliver much-needed data management to support the analytics function.
DG approaches in job adverts
In writing this paper we recognised that our insights may be unusual and biased, even though they appeared to concur with some literature. We therefore sought evidence of similar activities and approaches in other organisations and industries. Job adverts provided a convenient and accessible source of insights into data governance, and a larger sample, albeit indirect and subject to interpretation. Our data was derived from unsolicited emails that advertised vacancies to the authors.
Twenty-two adverts for positions with the job titles “data governance manager” and “information governance manager” were received by the authors in the period January to June 2021. These emails were typically sent by company recruiters and agencies who had discovered our profiles on LinkedIn, and content included job requirements and role descriptions. The roles were then classified according to how closely they matched the DMBOK governance activities (Table 1), whether they matched (any) data management knowledge areas on the DAMA wheel, and whether the roles indicated the function resembled the approaches in Table 2. Each email was scored for every activity or approach it resembled, regardless of the mix of DMBOK activities.
Advertised positions predominantly called for staff to either establish data governance, or improve data quality. The majority of adverts included some duties on the DMBOK list of data governance activities, but only one advert strongly correlated with that list. More adverts included duties that appear elsewhere on the DMBOK wheel, particularly data architecture, quality, modelling, and management of reference, master and meta-data. Importantly, many of these activities were ‘doing’ rather than ‘controlling’.
It should be noted that these adverts varied considerably in their depth and specificity. Employers’ approach to data governance could only be inferred from limited information, adverts may not have accurately reflected the duties to be performed, and the authors were likely to receive adverts for positions that matched their careers.
Table 3. Approaches inferred from job adverts
Discussion
First-hand experience and insights reveal that data governance practice tends to be reactive, operational, focused on data pain, and functions as data management. Analysis of these cases suggests that five characteristics of data governance practice may directly or indirectly lead to these problems.
1. Data governance is not well-understood
Data governance is often misunderstood by organisations, and is more typically associated with data quality and master data management. DAMA even acknowledges this may occur, and makes provision by including the governance activity “defining data governance for the organisation”.
We recommend that practitioners clarify the concept, determine as early as possible the definition for what is (and is not) data governance, and explain the difference between governance and management to stakeholders. This is especially important during planning and inception of the data governance function. If a viable strawman cannot be designed or early commitment obtained from senior leadership, then it may be better to commence with data management before data governance.
2. Data governance literature differs from practice
Practice does not undertake data governance as described by literature. Typically less than half the governance activities outlined in the DMBOK framework were undertaken by our employers, the remainder comprising data management and similar tasks. This is a consequence of focus, not maturity.
Data governance is not obliged to adopt theory. It is not a chartered profession, there are several frameworks to choose from, and the descriptions and definitions in the DMBOK are quite generic. Data governance should be pragmatic too, and delivering value and improvement is preferable to slavish adherence to a particular approach. However, practitioners would do well to at least understand the differences in literature, be familiar with the variety of frameworks, and know how their practice compares. There is nothing so practical as a good theory (Lewin, 1943); practitioners should appreciate that the knowledge, lessons and best practice encapsulated in a good framework will have taken substantial time, expense and trouble to acquire. Adopting a framework is sensible and credible, and moulding best practice to suit the organisation has been a critical success factor for some organisations.
3. Governance and management are seen as interchangeable
Data governance and data management are fundamentally different activities, yet in practice the two concepts are frequently conflated, and the function is inappropriately tasked with remediation activities or data management requirements.
Data governance can be treated as a convenient, quick and ‘free’ solution to data problems. Lacking in power and subject to budget pressures, poor understanding and politics, data governance may be driven to take on data ownership, stewardship and data management roles. Consequently it will tend to become conflicted, ineffective and wavering in its direction; delivery will be limited and stop-gap, wasting time and money; staff morale and turnover will suffer; and the function may ultimately fail.
It may be correct to prioritise data management before governance, but transforming data governance into data management rarely ends well. Our advice to practitioners and organisations would be to push back and educate: there can be no data governance without corresponding data management. Challenging the status quo is frequently a futile exercise, but one that should be attempted.
Theoretically, data governance can be the springboard that introduces data management to the organisation. It may not deliver data quality improvements or model data, but the business may have already formed the idea that governance and management are synonymous. Correcting this view will involve hurdles, such as time-consuming education and finding sufficient appetite, budget, capability and capacity to expand the function’s remit. Data governance could seek out and engage with (or integrate) existing data management capability elsewhere in the organisation. However, experience suggests that those teams will tend not to embrace this approach as they may lack the desire to change, prefer their focus and independence, be bound into silos, or not want the additional burden and responsibility. Resistance can be considerable, and only overcome with senior leadership commitment and influence.
Merging the two functions may be an option. A centralised and appropriately managed function that offers both governance and data management can be effective. However, to achieve a modicum of oversight will require operational separation, good management, a conducive organisation culture, and investment in data management that is proportional to the scale of problems and improvement ambitions.
4. Data Governance is seldom established correctly
Data governance is seldom established correctly for the roles it is tasked with, or for growth, and once established it can be difficult to realign. Four attributes are needed to achieve meaningful results.
Organisational positioning will have an impact on the scope, focus and authority of data governance.
For example, data governance as a minor IT department function will have negligible representation and influence in the broader enterprise, and will tend to be drawn into short-term data management issues. Such a position will inherently lack the seniority or authority to drive change, lack credibility and access to promote a vision at a senior level, is vulnerable to siloism and resistance, becomes bogged down in short-term thinking, and lacks routes for problem escalation. Ability to deliver improvement will also be frustrated by a budget that is susceptible to other programmes’ demands and overruns, and by IT that is preoccupied with application development, infrastructure maintenance, or projects.
Our recommendation is that data governance be given overt, enterprise-wide support and board-level support. This will ensure data governance has a broad and pervasive impact, is disposed towards cross-functional oversight and improvement, has the reach to encourage ownership and stewardship, and is abstracted from supporting operational problems.
Inappropriate focus will limit data governance’s effectiveness and growth.
Focus issues are common during inception and early development. Data governance can spend an inordinate amount of time on strategising, “boiling the ocean” by tackling all the data problems simultaneously, narrowly concentrating on an application or department’s pain, or fighting fires like topical data quality problems.
We find that it is better to do one thing well, to work within available capability and capacity, deliver tangible value and useful outcomes, and to devote time to strategically embedding data governance and extending ownership and stewardship. A particular initiative may be the catalyst and provide the budget for introduction, but data governance should always keep the enterprise in mind and communicate its growth plan upwards.
Authority and influence are required to give data governance the ‘teeth’ it needs.
Data governance tends to be overly reliant on persuasive presentations and personal charisma. Junior and mid-level managers are frequently expected to rely on charm to achieve complex change and to “sell” data governance to the organisation, whilst senior leadership pays lip service to its commitments to data improvement. Staff may lack the personal network, influence, capacity or capability, or even the aptitude to do this. Governance and selling involve different skill sets. Sound logic and articulated benefits are long-term tactics that are unlikely to overcome entrenched politics, self-interest, budget pressures, and lack of capacity or capability. New or junior-level functions are powerless to push back on demands for data management, convince the business to take ownership, drive expensive improvements, or independently set standards and value assets. Without any clear authority, governance will likely be ignored by busy project teams, deprioritised in the face of the latest crisis, and struggle to obtain funding proportional to the problems faced.
Our recommendation is to provide data governance with the requisite seniority, authority, commitment, and/or resources. The aforementioned positioning will go a long way to providing this. Enduring, pervasive and visible commitment from senior leadership provides inescapable cues to encourage cooperation; and may take the form of senior sponsorship, regular reporting to a high level, or governance filling key positions in senior control groups. Data governance should also take advantage of topical change agents; exploiting the urgency, visibility and commitment that surrounds situations like new legislation, a crisis, mergers and acquisitions, new enterprise technology, or renewed board-level focus on data.
Staff skills and aptitude must be suited to the chosen data governance approach.
Governance skills can be markedly different to data management skills, and regulated environments require a different mindset to the ‘entrepreneurial’ thinking and data exploitation we have seen in retail. Typical symptoms of misalignment include internal conflict, staff turnover, procrastination, overly inward focus, and general inefficiency.
Our recommendation is to seek people with a necessarily diverse set of technical and organisational skills, and who have the aptitude and mindset that ‘fits’ the chosen approach. Fit is determined, and subsequently maintained, through clarity and communication: of purpose, approach, available levels of authority or influence, explicit goals and expectations, resources and support, and the organisation’s provision for data management.
5. Goals and expectations are misaligned
Data governance should have explicit (or even tacitly understood) goals, but these may differ markedly from the expectations of the general business and even those that governance reports to.
Misalignment may manifest itself in various ways. Data governance operates over the long term, but the business may want immediate solutions. Business, IT and data governance may be working towards different outcomes, and the team may even have different expectations to those they report to. Differing understanding of agreed goals may arise through obfuscation, overly brief presentations, and lack of a shared vocabulary. Misalignment may be unconscious, driven by a lack of understanding, poor communication, or use of vague terminology. Misalignment may even be conscious or deliberate, with stakeholders unwilling or unable to openly articulate the real problem they wish data governance to address.
Misalignment of goals and expectations will have a detrimental effect on data governance and its relationship with the organisation. Typically this can lead to conflict, reduced engagement, demoralised and ineffective teams, poor choices in expensive tools, and loss of credibility.
Managing expectations begins with education, especially the rationale for governance and the difference between data governance and data management. Education should be accompanied by clear goals, the value these bring to the organisation, and a visible delivery plan. Alignment is helped by pragmatic goals that are derived from substantial and pressing problems; well-defined to include scope, scale, resources, collaboration required, stakeholders, escalation and SMART criteria: specific, measurable, assignable, realistic and timely (Doran, 1981). Naturally these goals and deliverables will drift with organisational pressures and evolving requirements. In itself not a bad thing, drift can be managed by having a long-term, enterprise-wide view on improvement, a clear and supported vision, sufficient support, and a mature attitude to improvement.
This study has found that data governance is not well understood, is not practised as described in theory, frequently resembles data management, is seldom established with the right authority or focus, and its goals are too often misaligned with the expectations of its stakeholders. There is an over-emphasis on reactive, operational remedies to short-term data pain, and excessive reliance on a junior or mid-level manager’s personality to drive change.
We argue that these characteristics can lead to data governance being hard to establish, failing to meet expectations and objectives, delivering little real value or long-term improvement, suffering from staff disillusionment and conflict, never properly overcoming organisational resistance, and ultimately floundering.
We argue that adopting or reinvigorating data governance should begin by understanding what is to be achieved and how. Data governance is best seen as a strategic enterprise-wide activity, whilst data management is the more suitable option where the expectation is for localised data improvements. Any informed choices will consider appropriate goals, the differences between data governance and data management, expected activities and scope, appropriate placement in the organisation, granting of sufficient authority and/or influence, and provision of resources that are proportionate and appropriate to the goals and activities. Decisions need to be communicated and embedded, such as by affording access to senior leadership, requiring progress reporting at a high level, and encouraging governance participation in strategic forums like Data Steering Groups. Changes to existing functions may require an overhaul of the teams or even a wider organisational culture shift. We have found that such change need not be glacial or expensive, and can happen at pace if suitably motivated and supported, for example by the imposition of data protection regulations.
Begg, C. and Caira, T. (2012) “Exploring the SME Quandary: data governance in practise in the small to medium-sized enterprise sector”, Electronic Journal of Information Systems Evaluation, volume 15, pages 3–13
Benfeldt, O., Persson, J. S. and Madsen, S. (2020) “Data Governance as a Collective Action Problem”, Information Systems Frontiers, volume 22, pages 299–313
DMBOK (2017), DAMA Data Management Body of Knowledge, 2nd edition, DAMA International, New Jersey
Doran, G. T. (1981) “There’s a S.M.A.R.T. way to write management’s goals and objectives”, Management Review, volume 70, issue 11, pages 35–36
Goetz, M. (2015) “Data Governance and Data Management Are Not Interchangeable”, Forrester Research, blog available at https://go.forrester.com/blogs/15-09-11-data_governance_and_data_management_are_not_interchangeable/
Lewin, K. (1943) “Psychology and the process of group living”, Journal of Social Psychology, volume 17, pages 113–131
McGregor, D. (1960) The Human Side of Enterprise, McGraw-Hill, New York
Micheli, M., Ponti, M., Craglia, M. and Suman, A. (2020) “Emerging models of data governance in the age of datafication”, Big Data & Society, September, available at https://journals.sagepub.com/doi/full/10.1177/2053951720948087
Nemer, J. (2019) “What are microservices?” Cloud Academy, 13 November, available online: https://cloudacademy.com/blog/microservices-architecture-challenge-advantage-drawback/
Otto, B. (2011) “Data Governance”, Business & Information Systems Engineering, volume 3, issue 4, pages 241–244
Plotkin, D. (2020) Data Stewardship: An Actionable Guide, 2nd edition, Academic Press, London
Ting Si Xue, C. (2016) “Benefits and challenges of the adoption of cloud computing in business”, International Journal on Cloud Computing, volume 6, pages 5–7
Ullmann, S. and Saunders, D. (2021) “Google Translate is sexist. What it needs is a little gender-sensitivity training”, Scroll.In, 5 April, available online: https://scroll.in/article/991275/google-translate-is-sexist-and-it-needs-a-little-gender-sensitivity-training
We acknowledge that this paper is limited in its bias, accuracy of observations, and ability to generalise its findings. Large organisations are complex and fluid, with many variables that make it impossible to confidently identify causal relationships. Insights and interpretations are based on experience, which may be biased and lack full access to the people or information behind key decisions. Extensive use is made of the DAMA DMBOK, but this framework is quite generic and represents just one view of data governance. Our evidence is largely derived from the UK retail sector: a highly competitive environment where short-term results are paramount, investment decisions are frequently short-term and reactive, and the “just do it” culture can at times be both challenging and enabling.