
Community Safety Initiatives | Evaluation


INTRODUCTION

The purpose of this paper is to discuss the main problems confronting those who must evaluate community safety initiatives. The paper first provides an overview of the problem, and then analyses the lack of support and initiative from governments, technical difficulties, access to data, political pressure, and utilisation.

COMMUNITY SAFETY EVALUATION

The initial challenge facing every community safety initiative is to meet crime reduction targets whilst also implementing preventative measures to ensure long-term reductions in crime and disorder. Arguably, high quality evaluation can play a role in this, as it can help build a better understanding of what works and how it works (Morton 2006). According to AG (2007), evaluation is concerned with making value-based judgments about a program. Mallock and Braithwaite (2005:4) define evaluation as “the systematic examination of a policy, program or project aimed at assessing its merit, value, worth, relevance or contribution”. Any evidence of the benefits and impact of initiatives will help to influence local partners in commissioning decisions. However, according to Morton (2006), some evaluators have been more able to undertake evaluations than others, and, as Read and Tilley (2000) claim, the evaluation stage continues to be a major weakness of community safety programs.


Proper evaluations of community safety initiatives are rare (Community Safety Centre 2000). According to Rhodes (2007), a range of policies and programs has been established with the aim of achieving greater community participation and involvement, leading to increased community capacity, yet there has been little evaluation of this approach or of the specific programs. Read and Tilley (2000) likewise claim that there is relatively little systematic evaluation and a shortage of good evaluations; moreover, what is available is generally weak.

According to AG (2007), the reasons for the lack of evaluation of community safety programs have not been studied extensively, but social, political and financial considerations are likely to have a strong influence. Evaluation studies consume resources; they therefore compete for the limited resources available and must be justified by the value of the information they provide. Several other factors are also relevant, including many program managers' and organisers' limited knowledge and experience of evaluation theory and practice. In addition, evaluation evidence is often seen as ‘bad news’: program objectives tend to be over-optimistic and hence are rarely fully met, a situation that evaluation might expose.

LACK OF SUPPORT AND INITIATIVE

According to Community Safety Centre (2000), little time and few resources are available for conducting evaluation. When evaluation does occur, size matters: the resources available for evaluation can depend on how large the partnership is (Cherney and Sutton 2004). Often, in small partnerships, no money is put aside for evaluation. Since the majority of serious evaluations are expensive, this is a particular problem for small projects, where a good evaluation may take up a relatively large proportion of the project budget; very often, people will therefore argue that it is an unnecessary cost. Furthermore, practitioners often feel that they can quite easily tell for themselves whether or not something has been a success. Community Safety Centre (2000) concludes that claims that something works, made by people who were involved in implementing the initiative, are often based on relatively weak evaluation evidence, commonly relying on general impressions that are not objective enough.

In Australia, for example, neither central nor regional government has so far encouraged evaluators to undertake their own evaluations (Cherney and Sutton 2004). Community Safety Centre (2000) and Morton (2006) also point to a lack of commitment from central government and local agencies, arguing that the problem lies in attracting and maintaining the involvement of people and agencies that are not really interested in crime prevention or community safety. According to Morton (2006), evaluators have only been required to produce quarterly reports with milestones for the future, not to undertake a real reflection on a project by writing a review and analysing the available data; all evaluators have to do is monitor whether money is being spent on outputs. Read and Tilley (2000) argue that little attention is paid to how initiatives may have had their effects: there is not enough investment in, or requirement for, evaluation.

According to Varone, Jacob and De Winter (2005), policy evaluation is an underdeveloped tool of Belgian public governance. They claim that partitocracy, the weakness of Parliament vis-à-vis the government, and the federalisation process that has characterised the country's recent institutional evolution all jeopardise the development of a mature evaluation culture.

TECHNICAL DIFFICULTIES

Evaluators might find barriers at each of the evaluation steps: problem formulation, design of instruments, research design, data collection, data analysis, findings and conclusions, and utilisation (Hagan 2000). In respect to problem formulation, evaluation researchers are often in a hurry to get on with the task without thoroughly grounding the evaluation in the major theoretical issues of the field. Glaser and Zeigler (1974) claim that much of what is regarded as in-house evaluation has been co-opted and is little more than head counting or the production of tables for annual reports. A further problem is the absence of standardised definitions. This confusion over definitions has not only impeded communication among researchers and, more importantly, between researchers and practitioners, but has also hindered comparisons and replications of research studies.

Furthermore, although evaluators would prefer control over treatment and a classic experimental design, with random assignment of cases to experimental and control groups, this seldom happens. In many instances it is very difficult to find organisations willing to undergo experimentation, particularly if it involves denying certain treatments to some clients (the control group). Program planners and staff may resist randomisation as a means of allocating treatments, arguing instead for assignment based on need or merit. The design may not be carried out correctly, resulting in nonequivalent experimental and control groups, or it may break down as some people refuse to participate or drop out of the different treatment groups (experimental mortality). Some feel that randomised designs create focused inequality, because some groups receive a treatment that others desire, and can thus cause reactions that may be confused with the effects of the treatment itself. Indeed, one major difficulty in evaluation research is procuring adequate control groups.

Much of the bemoaning of inadequate research design in evaluation methodology has arisen from an over-commitment to experimental designs and a deficient appreciation of the utility of post hoc controls by means of multivariate statistical techniques. It may be that more rapid progress can be made in the evaluation of preventive programs if research designs are based on a statistical rather than an experimental model. In respect to data collection, one principal shortcoming of much evaluation research has been its over-reliance on questionnaires as the primary means of data gathering. Finally, program supporters will seize on methodological or procedural problems in any evaluation that comes to a negative conclusion.
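To make the contrast concrete, below is a minimal sketch of the 'classic' design described above: cases randomly assigned to experimental and control groups, with the outcome compared by a simple difference in means. All case names and figures are hypothetical, and the sketch deliberately ignores the very complications (refusal, dropout, nonequivalent groups) just discussed.

```python
import random
from statistics import mean

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical outcome data: one post-programme score per case.
cases = {f"case_{i}": random.gauss(50, 10) for i in range(200)}

# Classic experimental design: random assignment to the two groups.
ids = list(cases)
random.shuffle(ids)
experimental, control = ids[:100], ids[100:]

# Pretend the intervention lowers the outcome by about 5 units
# (illustrative only; in reality the effect is what we are estimating).
treated_scores = [cases[i] - 5 for i in experimental]
control_scores = [cases[i] for i in control]

print(f"Estimated effect: {mean(treated_scores) - mean(control_scores):.2f}")
```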

Hagan (2000) also lists other obstacles to evaluation, including unsound and poorly done data analysis, unethical evaluations, naive and unprepared evaluation staff, and poor relationships between evaluation and program staff.

Community Safety Centre (2000) argues that, unlike experimental researchers, evaluators often have difficulty comparing their experimental groups with a control group. Although evaluators might attempt to find a similar group to compare with, it is usually impossible to apply the ideal experimental rigor of randomly allocating individuals to an experimental condition and a control condition.
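Where random allocation is impossible, one common fallback, in keeping with the statistical rather than experimental model mentioned earlier, is to compare the change over time in the target group with the change in a similar (but not randomly allocated) comparison group. A minimal difference-in-differences sketch, with entirely hypothetical before-and-after crime counts:

```python
# Hypothetical recorded-crime counts before and after the initiative,
# for the target area and a similar (non-random) comparison area.
target = {"before": 240, "after": 180}
comparison = {"before": 250, "after": 235}

# Difference-in-differences: the change in the target area minus the
# change in the comparison area, which nets out shared background trends.
did = (target["after"] - target["before"]) - (comparison["after"] - comparison["before"])
print(f"Estimated net change attributable to the initiative: {did}")  # -45
```

The estimate is only as good as the comparison group: if the two areas were on different trajectories to begin with, the result is biased, which is exactly the limitation described above.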

According to AG (2007), those responsible for commissioning or conducting evaluation studies also need to take account of the local social, cultural and political context if the evaluations are to produce evidence that is not only useful, but used.

According to Morton (2006), some evaluators have stressed their own incompetence, claiming that they do not know how to undertake evaluation. Schuller (2004) has referred to the lack of accuracy in evaluators' predictions, partly due to a lack of post-auditing information. She further argues that evaluators apply a narrow scope that stresses well-established knowledge of local impacts, whilst underplaying wider geographical, systematic, or time factors.

Evaluation research can be a complex and difficult task (Community Safety Centre 2000). Evaluators are often hampered by a lack of control over, and even knowledge of, the wide range of factors which may or may not impact on the performance indicators. While evaluating a single crime prevention initiative may be difficult enough, evaluating a full community safety project may be many times more complicated. The intervention package often impacts beyond the target area, and this impact needs to be anticipated. As an additional complication, evaluation research can itself have an impact on the outcome of an initiative: a secondary role of the audit process, for example, is to raise awareness of and build support for the initiative in the affected community.
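One crude way to quantify impact beyond the target area is the weighted displacement quotient used in crime prevention research, which compares the change in a buffer zone around the target area with the change in the target area itself, each measured relative to a control area. A minimal sketch with entirely hypothetical counts:

```python
# Hypothetical before/after counts for the target area (A), a buffer
# zone around it (B), and a control area (C) untouched by the initiative.
A_before, A_after = 120, 80   # target area
B_before, B_after = 100, 90   # buffer zone
C_before, C_after = 110, 108  # control area

# Weighted displacement quotient: buffer change relative to control,
# divided by target change relative to control. A positive value suggests
# diffusion of benefit into the buffer; a negative value suggests
# displacement of crime into it.
numerator = B_after / C_after - B_before / C_before
denominator = A_after / C_after - A_before / C_before
print(f"WDQ = {numerator / denominator:.2f}")  # 0.22: modest diffusion of benefit
```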

ACCESS TO DATA

A commonly reported problem with evaluation has been access to relevant data (Morton 2006). Morton (2006) claims that it is often hard to get good baseline data against which to evaluate a project, mainly because procedures and resources for appropriate multi-agency data collection and mapping are not in place. Often the relevant data is not recorded, or is not collated across services and analysed together to give a complete picture of the problem. Furthermore, partnerships often lack the analytical skills needed to use quantitative data (Morton 2006).
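As an illustration of what collating data across services might involve, the sketch below merges monthly counts from two hypothetical agency extracts into a single baseline table. The file names and layout (a month column followed by a count column, no header row) are assumptions made purely for the example.

```python
import csv
from collections import defaultdict

# Hypothetical extracts from two services; real partnerships would have
# to agree formats, periods and geographies before this step is possible.
sources = ["police_incidents.csv", "council_asb_reports.csv"]

baseline = defaultdict(dict)
for path in sources:
    with open(path, newline="") as f:
        for month, count in csv.reader(f):
            baseline[month][path] = int(count)

# One combined row per month gives the multi-agency baseline picture.
for month in sorted(baseline):
    print(month, baseline[month])
```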

According to Hagan (2000), if proper data for evaluation are lacking and there are no clear outcomes or criteria of organisational success, a proper evaluation cannot be undertaken. The success of the entire evaluation process hinges on the motivation of the administrator and organisation in calling for an evaluation in the first place. It should be possible to locate specific organisational objectives that are measurable, and the key assumptions of the program must be stated in a form that can be tested objectively. However, this often does not happen in practice.
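For instance, a hypothetical measurable objective such as 'recorded incidents will fall by at least 10% against the baseline period' can be written down as a directly testable check, leaving no room for after-the-fact reinterpretation:

```python
def objective_met(baseline: int, followup: int, target_reduction: float = 0.10) -> bool:
    """Test a hypothetical objective: incidents fall by at least
    target_reduction relative to the baseline period."""
    return followup <= baseline * (1 - target_reduction)

# 350 incidents against a baseline of 400 is a 12.5% reduction,
# so the hypothetical 10% target is met.
print(objective_met(400, 350))  # True
```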

POLITICAL PRESSURE

Political pressure can present another problem for evaluators. Administrators often want to spend all the available funding on implementation rather than evaluation (Morton 2006), so it is challenging for partnerships to argue for funding to be set aside for evaluation. Being aware of the political context of a program is thus a precondition for useable evaluation research (AG 2007). Evaluation research requires the active support and cooperation of the agency or program being evaluated (Hagan 2000). However, the program administrator's desire to reaffirm his or her position with favourable program evaluations may conflict with the evaluator's desire to obtain an objective appraisal of a program's impact. The end result may be either a research design with low scientific credibility and tainted results, or a credible study that never receives a public hearing because the administrator does not like the results.

According to Read and Tilley (2000), few evaluations are independent and evidence is used selectively: 84% of the evaluations they studied were conducted by the initiative coordinator or staff, and only 9% by an independent external evaluator. There is also undue satisfaction with a reduction in crime as an indicator that the initiative was effective, without attention to alternative explanations or to possible side-effects. The evaluator's job is further affected by the need to balance being strategic against pressure from local authorities and central agencies to produce “runs on the board”, as well as by the greater value placed on “projects” compared to “planning” within local authorities (Cherney and Sutton 2004).

According to Hagan (2000), even the best laid evaluation plans can “bite the dust” in the “high noon” of political reality. In discussing the politicisation of evaluation research, Hagan (2000) points out the increasing political nature of evaluations as they are increasingly used to decide the future of programs. Part of the administrator's concern about evaluation research comes from the dilemma that research creates: the evaluation process casts the administrator in contradictory roles. On the one hand, he or she is the key person in the agency, and the success of its various operations, including evaluation, depends on his or her knowledge and involvement. On the other hand, evaluation carries the potential of discrediting an administratively sponsored program or of undermining a position the administrator has taken.

MURPHY’S LAW

Hagan (2000) applies Murphy's Law to evaluation research, clearly indicating the barriers that evaluators face. In relation to evaluation design:

  • the resources needed to complete the evaluation will exceed the original projection by a factor of two.
  • after an evaluation has been completed and is believed to control for all relevant variables, others will be discovered and rival hypotheses will multiply geometrically.
  • the necessity of making a major design change increases as the evaluation project nears completion.

In relation to evaluation management:

  • the probability of a breakdown in cooperation between the evaluation project and an operational agency is directly proportional to the trouble it can cause.
  • if staying on schedule is dependent on a number of activities which may be completed before or after an allotted time interval, the total time needed will accumulate in the direction of becoming further and further behind schedule.

In relation to data collection:

  • the availability of a data element is inversely proportional to the need for that element.
  • historical baseline data will be recorded in units or by criteria other than those used in present or future records.
  • none of the available self-report formats will work as well as you expect.

In relation to data analysis and interpretation:

  • in a mathematical calculation, any error that can creep in, will; it will accumulate in the direction that does the most damage to the results of the calculation.
  • the figure that is most obviously correct will be the source of error.
  • if an analysis matrix requires “n” data elements to make the analysis easy and logical, there will always be “n-1” available.
  • when tabulating data, the line totals and the column totals should add up to the grand total; they won't.

In relation to presentation of evaluation findings:

  • the more extensive and thorough the evaluation, the less likely it is that the findings will be used by decision makers.

UTILISATION

Evaluators often approach their job knowing that evaluation results are frequently not utilised appropriately, which can significantly affect their performance. Hagan (2000) claims that evaluations have not been effectively utilised, and that much of this waste is due to passive bias and censorship within the field itself, which prevent the publication of weaker, less scientific findings, and to misplaced client loyalty. Cherney and Sutton (2004) argue that there has been a lack of status and authority within the overall structure of local government to facilitate change in policies and practices. Furthermore, there are agencies and units, both within local authorities and externally, that are unwilling to be held accountable for community safety outcomes. According to Schuller (2004), there has been inadequate organisation, scheduling and institutional integration into the overall decision-making process, with impact assessment often undertaken towards the end. It has also been suggested that the most pertinent issue may be not to predict accurately, but to define appropriate goals and then set up an organisation that can effectively adapt and audit the project to achieve them.

CONCLUSION

This paper has discussed the main problems confronting those who must evaluate community safety initiatives, looking at the issues of support and initiative, technical difficulties, access to data, political pressure, and low utilisation. Proper evaluations of community safety initiatives are rare. Little time and few resources are available for conducting evaluation, and there is a lack of commitment from government and local agencies. Barriers arise throughout the evaluation process, including problem formulation, design of instruments, research design, data collection, data analysis, findings and conclusions, and utilisation. Further barriers are created by a lack of attention to the local social, cultural and political context. Some evaluators have even stressed their own incompetence, claiming that they do not know how to undertake evaluation. Relevant data is often not recorded or collated to give a complete picture of the problem. Political pressure also presents a significant problem, as administrators find themselves in contradictory roles and often want to spend all the available funding on implementation rather than evaluation. Finally, evaluation results have not been effectively utilised, which can have a significant negative impact on evaluators.

BIBLIOGRAPHY

Australian Government Attorney-General's Department (AG) (2007). “Conceptual Foundations of Evaluation Models”.

Cherney, A. and Sutton, A. (2004). “Aussie Experience: local government community safety officers and capacity building”. Community Safety Journal, Vol. 3, Iss. 3, p. 31.

Community Safety Centre (2000). “Research and Evaluation”. Community Safety Research and Evaluation Bulletin, No. 1.

Glaser, D. and Zeigler, M.S. (1974). “The Use of the Death Penalty v. the Outrage at Murder”. Crime and Delinquency, pp. 333-338.

Hagan, F.E. (2000). Research Methods in Criminal Justice and Criminology. Allyn and Bacon.

Mallock, N.A. and Braithwaite, J. (2005). “Evaluation of the Safety Improvement Program in New South Wales: study no. 9”. University of New South Wales.

Morton, S. (2006). “Community Safety in Practice – the importance of evaluation”. Community Safety Journal, Vol. 5, Iss. 1, p. 12.

Read, T. and Tilley, N. (2000). “Not Rocket Science? Problem-solving and crime reduction”. Crime Reduction Research Series Paper 6, Home Office.

Rhodes, A. (2007). “Evaluation of Community Safety Policies and Programs”. RMIT University.

Schuller, N. (2004). “Urban Growth and Community Safety: developing the impact assessment approach”. Community Safety Journal, Vol. 3, Iss. 4, p. 4.

Varone, F., Jacob, S. and De Winter, L. (2005). “Polity, Politics and Policy Evaluation in Belgium”. Evaluation, Vol. 11, No. 3, pp. 253-273.
