Continuing Professional Research

This page outlines some of my continuing research and past professional and educational projects. These samples are meant to illustrate my knowledge and professional experience.

Sample 1: Action Research and Reflective Practice Toward Learning

Action research is practical research that happens during practice (Stringer, 2007). It occurs as professionals interact with learners in real-life settings using an evidence-based practice, then analyze their own processes for needed changes (Norcross, 2008). Action-based research is closely related to reflective practice (Norcross, 2008): reflective practice is the mental process a professional engages in during action-based research. As an example, the process might begin with a professional trying a research-based practice in a real-world setting. The practice seems to miss the mark, and the professional wonders what went wrong. That wondering leads to reflection. They consider what they could try differently, engage with other professionals who have attempted similar practices, and combine data from past research with what they learned from their first attempt; they then make adjustments to the research model (Norcross, 2008). After a great deal of reflection, research, and networking, they tweak their practice. At this point they have extended the action research into reflective practice. They attempt the research-based practice again in a real-world setting, often a classroom, an individual therapy session, or group treatment (Jacobson, Foxx, & Mulick, 2005). This time the practice goes somewhat better: they are able to elicit the response they intended. They track the adjustments to their practice, collect data, and once more begin the process of reflection toward action-based research that will further improve their work (Norcross, 2008). They are now practical research scientists, engaging in action-based and reflective practice in real-life settings (Laureate Education, 2010).
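
The cycle just described can be sketched as a simple loop. This is a minimal illustration under my own assumptions; the function names, the 0-1 outcome score, and the success threshold are hypothetical placeholders, not a cited model:

```python
# A minimal sketch of the action research loop: act, observe, reflect,
# adjust, repeat. All names and the 0-1 outcome scale are hypothetical.

def action_research_loop(practice, try_practice, reflect_and_adjust,
                         target=0.8, max_cycles=10):
    """Repeat act -> observe -> reflect until the outcome is acceptable."""
    history = []
    for cycle in range(max_cycles):
        outcome = try_practice(practice)            # act in a real setting
        history.append((cycle, practice, outcome))  # collect data
        if outcome >= target:                       # intended response elicited
            break
        practice = reflect_and_adjust(practice, outcome, history)
    return practice, history
```

The key feature the sketch captures is that the loop terminates on an intended outcome, not on confirming a fixed hypothesis, which is the distinction from the scientific method drawn below.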

Having worked as a mental health coordinator, I gave many presentations to preschool teachers. One of the main presentations I gave during 2010, after receiving training, was on reflective practice and action-based research. Teachers and educators often use action-based research in classrooms because much of the science of education is based upon theory and ideology that supports promising practices qualitatively but has not been conclusively measured quantitatively (Jacobson, Foxx, & Mulick, 2005). Counselors do this as well when engaging in therapy (Norcross, 2008). Much like many education models, psychopathology is often based upon qualitative or mixed models of research (Jacobson, Foxx, & Mulick, 2005). Many of these theories are effective, but it is difficult to support their effectiveness objectively due to the lack of empirical data from natural settings (Norcross, 2008). Research that lacks empirical data may need to rely upon action-based research to support practical qualitative outcomes (Norcross, 2008). Education and counseling are also both professions that are applied in real-world settings. Applied sciences are strengthened by all types of research (qualitative, quantitative, and mixed methods) through reflective practice and action-based research (Jacobson, Foxx, & Mulick, 2005). Improving and strengthening research through “trying and doing” benefits society, professions, and communities (Stringer, 2007).

Personally, I think that action-based research is a reasonable and practical use of science. Having worked in classrooms and counseling agencies, I know that research practices written in textbooks and implemented in controlled settings are not always practical in the real world (Israel & Ilvento, 1995). Another advantage of action-based research is that it allows teachers and therapists to track outcomes both quantitatively and qualitatively. For example, in education settings, practices are evaluated for instruction, curriculum, and assessment prior to being embraced by administration. Once embraced, they are implemented in classrooms. Standardized tests collect large amounts of quantitative statistical data used to track teaching practices and student progress. If the data tracked from teacher practices (teacher feedback, student progress, and/or changes in behavior) show that the practices are not productive, do not elicit the intended response, or do not produce favorable testing outcomes, then the administration searches for more effective practices (Jacobson, Foxx, & Mulick, 2005). Without professionals trained in reflective practice, qualitative data involving creative open-ended outcomes, useful for measuring ingenuity and creativity, might be done away with due to a lack of evidence supporting its value (Gardner, 1993). Qualitative measures of creative practices are needed to support individualized learning and multiple intelligences, which are often not captured by standardized scales and tests (Gardner, 1993). Reflective practice and action-based research increase the applicability of evidence-based research and promising practices by allowing counselors and teachers to become scientists within real-world settings (Stringer, 2007).

There are some drawbacks to action-based research. Few professionals are skilled in action research; although it sounds simple, it is a complicated process toward improving practice. It requires a great deal of introspection in a continuous loop of acting, observing, and reflecting, then beginning the loop again until the process stimulates the needed response (Stringer, 2007). It requires a high level of critical thinking. It is similar to the scientific method, in which scientists control the setting, manipulate the independent variables, and measure the dependent variables in order to test a hypothesis (allpsych.com, 2012). It differs from the scientific method in that during action-based research the controls are often thrown out in an effort to arrive at an intended outcome, not a hypothesized specific response reflecting one correct answer (Norcross, 2008). The drawback here is that reliability is affected. The practice may be influenced by multiple variables that contribute toward its success or failure on any given day (Jacobson, Foxx, & Mulick, 2005). When this occurs, the outcomes of action-based research can be unpredictable. It may not be the practice that is flawed, but the delivery of the practice that renders it ineffective (Jacobson, Foxx, & Mulick, 2005). Without the use of reflective practice and quality action-based research skills, perfectly effective practices may be discarded prematurely (Jacobson, Foxx, & Mulick, 2005). This is a problem in real-world settings where professionals are not highly qualified. It is the use of reflective practice and action-based research that separates the theorists from the analysts. For this reason, training opportunities and highly qualified professionals implementing reflective practice are critical to improved outcomes (http://education.state.mn.us, 2012).

When I consider the reasons for action-based research and reflective practice, I consider the differences between evidence-based practices and best practices (Norcross, 2008). Although they differ greatly, it seems both have value. The use of reflective practice promotes doing no harm to children, families, and society as a whole. When reflective practice and action-based research are used, they improve the delivery of services and track progress (Stringer, 2007). If data is measured using quantitative research alone, as in the scientific method, creativity and ingenuity are lost in the process. Skilled professionals with the ability to think outside the box trade innovative abilities for analytic thinking in order to answer questions with a single correct answer. Innovative thinking is needed to advance the good of society and solve global challenges such as the energy crisis, environmental stress, poverty, and world hunger (imanagronomist.net). In this way, differing cognitive learning styles and reflective teaching strategies contribute toward the good of society and the stewardship of humanity.

Resources

Israel, G. D., & Ilvento, T. W. (1995, April). Everybody wins: Involving youth in community needs assessment. Journal of Extension, 33(2). Retrieved from http://www.joe.org/joe/1995april/a1.php

Gardner, H. (1993). Multiple intelligences: The theory in practice. New York, NY: Basic Books. pp. 69-71.

Norcross, J., Hogan, T., & Koocher, G. (2008). Clinician’s guide to evidence-based practices: Mental health and the addictions. New York, NY: Oxford University Press. pp. 260-270.

Laureate Education, Inc. (Producer). (2010). Research and program evaluation. Baltimore, MD: Author.

Stringer, E. T. (2007). Action research (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc. pp. 5-6, 8-9.

Jacobson, J. W., Foxx, R. M., & Mulick, J. A. (Eds.). (2005). Controversial therapies for developmental disabilities: Fad, fashion, and science in professional practice. Mahwah, NJ: Lawrence Erlbaum Associates. pp. 39, 63, 66.

Retrieved January 17 from http://www.imanagronomist.net/

AllPsych Online. Research methods, Chapter 1: Introduction to research. Retrieved January 17 from http://allpsych.com/researchmethods/introduction.html

Minnesota Department of Education. Retrieved January 14 from http://education.state.mn.us/mde/index.html

Sample 2: Definition and Measures of Self-Efficacy

Operationalized Definition of Self-Efficacy – The concept of self-efficacy is derived from the notion of self-esteem. Self-esteem expands from an individual’s self-concept to their self-efficacy. This happens through a process of building an individual’s belief in their abilities through task completion.

According to O’Sullivan and Strauser (2009), self-efficacy was operationalized by Bandura as “an individual’s conviction that he or she can successfully execute the behaviors required to produce the desired outcome” (Bandura, 1977). Bandura makes an important point by including internal mechanisms of self-efficacy such as conviction. However, other overt contributions, such as task completion, are needed in order to measure self-efficacy (Sheperis, Young, & Daniels, 2010). O’Sullivan and Strauser (2009) indicated that self-efficacy draws on both an individual’s self-perception and behavioral components. Self-efficacy is then defined as a reflection of abilities based upon previous task-completion experiences. Reflection is then an internal process explaining both learning and behavior change (Schunk & Zimmerman, 1998). This definition departs from the traditional definition of self-efficacy and is thus an emerging theory (Sheperis, Young, & Daniels, 2010).

Defined in this way, self-efficacy is not simply a belief in abilities but a system of mental processes continuously altering confidence perceptions (O’Sullivan & Strauser, 2009). This mental process, in theory, would begin with self-efficacy perceptions based upon prior experiences, be impacted by the completion of a self-efficacy performance task, be followed by reflection on self-efficacy based upon performance, and end with altered or unchanged beliefs about self-efficacy (Illustration A). Consider a study built around the question, “What are the differences in perceptions of self-efficacy among counseling students in online and land-based counseling graduate degree programs?” (sylvan.live.ecollege.com, 2011). Defining self-efficacy in this way, the question would require both qualitative and quantitative measures (O’Sullivan & Strauser, 2009). This definition challenges the current definition of self-efficacy as a belief in the ability to succeed (Bandura, 1977). It asks the age-old question: which comes first, the chicken or the egg?
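
The feedback cycle in Illustration A can be made concrete with a toy numeric model. The update rule and learning rate below are my own illustrative assumptions, not a formula from the cited literature:

```python
# A toy model of the self-efficacy feedback loop in Illustration A.
# The update rule and learning rate are illustrative assumptions only.

def update_self_efficacy(belief, task_success, learning_rate=0.2):
    """Shift a 0-1 self-efficacy belief toward the observed outcome.

    belief:       prior confidence (0.0 = none, 1.0 = certain)
    task_success: 1.0 if the performance task was completed, else 0.0
    """
    return belief + learning_rate * (task_success - belief)

belief = 0.5                      # prior perception from past experience
for outcome in [1.0, 1.0, 0.0]:   # completed, completed, failed
    belief = update_self_efficacy(belief, outcome)
    print(round(belief, 3))       # 0.6, 0.68, 0.544
```

The point of the sketch is simply that beliefs both shape and are reshaped by task outcomes, which is why the chicken-and-egg question arises.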

In considering task completion as a measure of self-efficacy, the zone of proximal development needs to be considered in rehabilitation counseling (Schunk & Zimmerman, 1998). The zone of proximal development (ZPD), a term coined by Lev Vygotsky, refers to the range of tasks an individual cannot yet complete independently but can complete with guidance (Kozulin, Gindis, Ageyev, & Miller, 2003). Considering ZPD, clients redefine their self-efficacy when challenged to complete tasks within their zone of proximal development (Kozulin, Gindis, Ageyev, & Miller, 2003). In this approach, clients are assigned tasks that are neither too difficult nor too easy to complete (Kozulin, Gindis, Ageyev, & Miller, 2003). Performance testing is one method for determining an individual’s ZPD (Kozulin, Gindis, Ageyev, & Miller, 2003). Measuring self-efficacy through task completion would require an individual to complete a task within their zone of proximal development, after which observers would complete a task analysis. The task-analysis data, used in conjunction with survey data or a Likert scale (simplypsychology.org, 2013), would indicate the individual’s level of self-efficacy. These measures would indicate a perceived level of self-efficacy, and a task completed in conjunction with the survey would strengthen this perception.

In measuring self-efficacy, the participant is asked to complete a chain of behaviors, neither too challenging nor too easy, toward a desired event (Kozulin, Gindis, Ageyev, & Miller, 2003). If the survey and task data indicate that participants both have confidence and are able to produce results, this definition would be operationalized by the client indicating, “I am confident I can complete a task within my abilities successfully.” The order of these qualitative and quantitative measures would be important, treating both as allies (Sheperis, Young, & Daniels, 2010). While belief and conviction are important attributes, as Bandura suggests, measures of behavioral outcomes and abilities are also important (Norcross, 2008). In asking for the completion of a task, researchers are able to measure self-efficacy in an objective manner. Clients who know they will be tested on their self-efficacy, both qualitatively and quantitatively, followed by a task-analysis phase, may be more likely to give honest answers in the survey, and some may have increased motivation to complete the task successfully (O’Sullivan & Strauser, 2009).

Survey of Self-Efficacy Replaced with Likert Scale – As first projected, a survey could be developed to measure and monitor progress (simplypsychology.org, 2013). However, the use of a Likert scale would offer a quick quantitative analysis, decreasing the need for additional staff resources (simplypsychology.org, 2013). Both of these options can statistically measure the client’s self-efficacy and monitor progress (simplypsychology.org, 2013). These measures would also give the researcher knowledge of the difficulty of the task, ensuring the task falls within the client’s zone of proximal development as the client progresses (Kozulin, Gindis, Ageyev, & Miller, 2003). In considering ZPD, researchers need to think of development as varying across the lifespan, not as contingent on intelligence alone (Kozulin, Gindis, Ageyev, & Miller, 2003). What a client may be capable of completing at one developmental stage, they may be unable to complete at another (Norcross, 2008). For this reason, the client’s self-perception of their self-efficacy is an important consideration that needs to be measured and monitored (Norcross, 2008).
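
As a rough sketch of how such a Likert instrument might be scored, assuming a 5-point response format and invented item wording (neither is drawn from a published scale):

```python
# Scoring a hypothetical 5-point Likert self-efficacy survey.
# Items marked reverse=True are worded negatively and must be flipped.

ITEMS = [
    {"text": "I can complete assigned tasks successfully.", "reverse": False},
    {"text": "I give up when a task becomes difficult.",    "reverse": True},
    {"text": "I can succeed even when conditions change.",  "reverse": False},
]

def score_survey(responses, items=ITEMS, scale_max=5):
    """Return the mean item score (1-5) for one respondent."""
    adjusted = [
        (scale_max + 1 - r) if item["reverse"] else r
        for r, item in zip(responses, items)
    ]
    return sum(adjusted) / len(adjusted)

print(score_survey([4, 2, 5]))  # reverse-coded 2 becomes 4 -> mean ~4.33
```

Reverse-scored items help keep a respondent from inflating the total by answering every item the same way.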

Rationale – The old adage goes, “Success breeds success” (Schunk & Zimmerman, 1998). Human beings met with success build self-confidence, which supports their self-efficacy (Schunk & Zimmerman, 1998). Completing a study incorporating observational data measures with survey results would encourage the professional practice of teaching for self-efficacy while considering ZPD (Kozulin, Gindis, Ageyev, & Miller, 2003). This is important to rehabilitation therapists, whose job may be to increase self-efficacy prior to setting goals (O’Sullivan & Strauser, 2009). In this model, instructors would teach within students’ abilities, increasing their motivation and confidence levels and resulting in increased self-efficacy (O’Sullivan & Strauser, 2009). Theoretically, incorporating these components would increase student progress and outcomes (Bandura, 1977).

Data Analysis – Within the survey I used questions that are domain-specific to college students completing a task (O’Sullivan & Strauser, 2009). These indicate the level of confidence an individual has in completing a task. Also, when compared to the task-analysis data, they allow confidence intervals to be computed (Norcross, 2008). I used wording that was domain-specific to both online and land-based college students, but not so specific that a question would not be relevant to both sample groups (Sheperis, Young, & Daniels, 2010). In this I used some generalizations across global self-efficacy confidence levels (O’Sullivan & Strauser, 2009), meaning I used questions that could be generalized to almost any academic population sample. In choosing a random sample, this would be important (Norcross, 2008). Another inclusion was situations that would require self-efficacy with some loss of control (O’Sullivan & Strauser, 2009). Here I ask about ingenuity and environmental factors (O’Sullivan & Strauser, 2009). Clients who indicate an awareness of loss of control while also being able to maintain success indicate confidence in their abilities to succeed (O’Sullivan & Strauser, 2009). The success or failure of the task would influence efficacy expectations through outcomes and experiences (O’Sullivan & Strauser, 2009). For this reason, in using mixed methods it is important to consider the order of events, first survey and then task analysis, to ensure the most relevant data is collected first (Sheperis, Young, & Daniels, 2010).
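
To make the confidence-interval idea concrete, here is one standard way to compute a 95% interval around a group’s mean survey score; the sample scores are invented for illustration:

```python
# 95% confidence interval for a mean Likert score, using a t-interval.
import statistics
from math import sqrt

scores = [4.3, 3.7, 4.0, 4.6, 3.9, 4.2, 3.8, 4.4]  # invented sample data

mean = statistics.mean(scores)
sem = statistics.stdev(scores) / sqrt(len(scores))  # standard error of the mean
t_crit = 2.365  # t critical value for df = 7 at 95% (from a t-table)

low, high = mean - t_crit * sem, mean + t_crit * sem
print(f"mean={mean:.2f}, 95% CI=({low:.2f}, {high:.2f})")  # ~ (3.85, 4.37)
```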

Discussion – When I conceived of the idea to use observation in the form of task analysis alongside survey results comparing these two student groups, I did so to increase the validity of the survey data. Survey data is subjective by nature because it measures internal mechanisms such as thoughts and feelings, and I was bothered by the lack of objectivity in using a survey alone to measure self-efficacy qualitatively (Sheperis, Young, & Daniels, 2010). Adding the observational data (task analysis) incorporated a component of validity into the survey results and objectivity into the measurement (Sheperis, Young, & Daniels, 2010).

To begin, I would first randomly select pools of participants: one group of online students and one group of land-based students, with one control group (Benson et al., 2005). The participants of each pool would be chosen after completing an individual student performance test. I would then randomly select, from performance-test takers, sample groups based upon their performance (ZPD), race, and gender. A power analysis would be used to anticipate a sample size supporting sound statistical data (Norcross, 2008). Performance data would be important to measure for placement in groups considering ZPD, for the task-completion measure. Students would be assigned high, medium, or low levels of performance for ZPD task selection. This data would not itself be analyzed; it would be used to ensure that students’ self-efficacy on task completion is measured using tasks within each student’s ZPD. I would choose a mixed-methods triangulation design, though I am not sure these two dependent variables are equal (Sheperis, Young, & Daniels, 2010). I would compare groups using traditional pretest and posttest designs, noting the changes after task completion and reflection. This data would be collected using multiple observers to decrease observer bias. I would then analyze the data, incorporating results gathered from surveys and Likert scales (Sheperis, Young, & Daniels, 2010).
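
A sketch of the sampling steps described above, assuming a medium effect size and a hypothetical participant roster (both are illustrative choices, not values from the cited sources):

```python
# Sketch: a power analysis to size the groups, then random assignment.
import random
from statsmodels.stats.power import TTestIndPower

# How many participants per group to detect a medium effect (d = 0.5)
# with alpha = .05 and 80% power? (Effect size is an assumption.)
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.8)
print(round(n_per_group))  # ~64 per group

# Randomly assign a (hypothetical) participant roster to three groups.
roster = [f"participant_{i}" for i in range(1, 31)]
random.shuffle(roster)
online, land_based, control = roster[0::3], roster[1::3], roster[2::3]
```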

Another important consideration in choosing pool participants might be the individual’s cognitive learning style and/or individual learning strengths and preferences (Gardner, 1993). These would extend the individual’s ZPD to other areas of unique strengths and individual attributes. Another consideration that may impact task completion and increased levels of self-efficacy is the level of motivation. For these reasons, it may be optimal to select a task of personal preference, or one personally motivating to the individual, to control for this effect.

Challenges – The challenge is in supporting my definition of self-efficacy while conducting comparison studies between these two groups. As my definition is an emerging theory, based more upon cognitive-behavioral models of learning than psychotherapy, I may have difficulty in its acceptance by a review board (Sheperis, Young, & Daniels, 2010). Another challenge is clearly defining the performance task within the client’s ZPD (Kozulin, Gindis, Ageyev, & Miller, 2003). Clearly, the study measures the self-efficacy of clients based upon my definition of self-efficacy, set against previously defined notions (Bandura, 1977). New theories are carefully analyzed, especially when extending previously reputable works (Bandura, 1977). A final challenge I see is in the analysis of the data using mixed methods (Sheperis, Young, & Daniels, 2010). In order to support the definition here, it would be important that pre- and post-test measures are recorded (Benson et al., 2005). It would be even more beneficial to this study to record multiple pre- and post-test measures to strengthen validity and reliability (Benson et al., 2005). With two dependent variables, this would be very time-consuming given the number of participants needed to produce sound statistical data (Norcross, 2008). A further challenge with the design: due to the lack of research supporting my definition, it is unclear whether the two dependent measures are equal (Sheperis, Young, & Daniels, 2010).

Resources

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191-215.

Benson, Johnson, Taylor, Treat, Shinkareva, & Duncan. (2005). Achievement in college online and campus based career and technical education courses. Community College Journal of Research and Practice, 29, 369–394. doi:10.1080/10668920590921589

Gardner, H. (1993). Multiple intelligences: The theory in practice. New York, NY: Basic Books. pp. 69-71.

Kozulin, Gindis, Ageyev, & Miller (Eds.). (2003). Vygotsky’s educational theory in cultural context. New York, NY: Cambridge University Press. p. 90.

Norcross, J., Hogan, T., & Koocher, G. (2008). Clinician’s guide to evidence-based practices: Mental health and the addictions. New York, NY: Oxford University Press. pp. 98-295, 260.

O’Sullivan & Strauser. (2009). Operationalizing self-efficacy, related social cognitive variables, and moderating effects: Implications for rehabilitation research and practice. Rehabilitation Counseling Bulletin, 52(4), 251–258. doi:10.1177/0034355208329356

Retrieved October 10, 2011, from http://sylvan.live.ecollege.com/ec/crs/default.learn?CourseID=5655434&Survey=1&47=6498005&ClientNodeID=984642&coursenav=1&bhcp=1

Retrieved February 10, 2013, from http://www.simplypsychology.org/likert-scale.html

Sheperis, C. J., Young, J. S., & Daniels, M. H. (2010). Counseling research: Quantitative, qualitative, and mixed methods (Kindle ed.). Upper Saddle River, NJ: Pearson Education, Inc.

Schunk, D., & Zimmerman, B. (1998). Self-regulated learning: From teaching to self-reflective practice. New York, NY: Guilford Press. pp. 3-46.

Sample 3: Research Findings Submitted During Practicum

In reference to your question about bonding and attachment and the amount of custody/visitation time needed to promote them: most studies I found indicate a need for “continuous and regular visits with non-custodial parents/caregivers” to promote attachment. Weekly and overnight visits with non-custodial parents were suggested. However, other variables were discussed, such as the amount of conflict between caregivers, the child’s exposure to trauma, the mental and emotional status of the caregiver, the responsiveness of the caregiver, the reciprocal interactions between the child and caregiver, and opportunities to bond with siblings. Almost all the studies I read indicated that attachment abilities decline significantly after age 3; although attachment is still possible, each year after age 3 decreases a child’s ability to build trusting relationships throughout life. Overall, continuous and frequent visitation was suggested to promote bonding. In these studies, no specific amount of time or measure was indicated. Most indicated that the younger the child, the more frequent the contact needed to promote attachment. I compare this to object permanence, relative to relationships. Children begin to understand that objects and people don’t disappear when they are unseen. They are able to hold the image of a caregiver in their mind and begin to understand concepts of time and trust that the parent will return. For the most part, they hold a basic core developmental view. I am attaching a couple of research studies I felt supported this summary.

Sample 4: Intelligence Assessment Wechsler Intelligence Scale for Children

Intelligence assessments are used to gain insight into the cognitive processes of an individual. The Wechsler Intelligence Scale for Children – Fourth Edition Integrated (WISC-IV Integrated) measures these abilities in children, giving parents and professionals information about the strengths and deficits of a specific child compared to a norm group. This information can be useful in academic planning, accommodations, and supports (Wechsler, 2004).

The WISC-IV Integrated consists of four cognitive testing domains: verbal comprehension, perceptual reasoning, working memory, and processing speed (Wechsler, 2004). Each of these domains includes index scales that contribute toward a full-scale intelligence quotient (I.Q.). The core sub-tests measure the abilities of an individual in each domain. Verbal comprehension is measured using three core sub-tests: similarities, vocabulary, and comprehension (Wechsler, 2004). The perceptual reasoning scale also uses three core sub-tests: block design, picture concepts, and matrix reasoning. In the area of working memory there are only two core sub-tests: digit span and letter-number sequencing (Wechsler, 2004). Last, processing speed is measured using the core sub-tests coding and symbol search. There are also supplemental sub-tests, and although these may be used to derive additional information, they may not be substituted for any of the core sub-tests in any one domain (Wechsler, 2004). They are considered process sub-tests, meaning that they give information about the cognitive processes of the specific individual being assessed.
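
For reference, the domain-to-core-sub-test structure just described can be written down as a simple mapping (a summary of the paragraph above, not code from any test publisher):

```python
# The WISC-IV core sub-test structure described above, as a simple map.
WISC_IV_CORE = {
    "verbal comprehension": ["similarities", "vocabulary", "comprehension"],
    "perceptual reasoning": ["block design", "picture concepts",
                             "matrix reasoning"],
    "working memory":       ["digit span", "letter-number sequencing"],
    "processing speed":     ["coding", "symbol search"],
}
```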

An examiner with training in assessment and statistics is needed in order to ensure the reliability and validity of the full-scale I.Q. The examiner presents the sub-scales to the child separately as they proceed through the assessment (Wechsler, 2004). This is a time-consuming assessment; the entire scale can take 65-80 minutes depending upon the cognitive processes of the child (pearsonassessments.com, 2011). It is important for the examiner to be familiar with, and trained in, this scale. The core sub-test block design is built around block patterns: during assessment, the child has to copy a design using red and white colored blocks, recreating it within a specific period of time (Wechsler, 2004). Other core sub-tests to be familiar with include similarities and digit span. In the similarities sub-test, the child is presented with two words that represent common items or concepts and is directed to compare them, describing how the items are similar. Within the digit span sub-test, the child is directed to repeat back numbers that the examiner first verbalizes (Wechsler, 2004). The number of digits increases as the child proceeds through the sub-scale. The child is also directed to repeat the digits back to the examiner in reverse order within the digit span backwards portion of the sub-test (Wechsler, 2004).

In being familiar with the assessment, the examiner must understand definitions explained in operational terms. One term to be familiar with is “full scale.” The full-scale I.Q. is the score derived from all of the core sub-tests; it is a combination of the performance I.Q. and the verbal I.Q. (http://www.minddisorders.com, 2011). The “verbal I.Q.” is derived from the sub-tests information, digit span, vocabulary, arithmetic, comprehension, similarities, and letter-number sequencing (Wechsler, 2004). The “performance I.Q.,” on the other hand, is derived from picture completion, picture arrangement, block design, object assembly, digit symbol, matrix reasoning, and symbol search (http://www.minddisorders.com, 2011). Another term often used with this assessment is “split score,” which means there is a split in the raw scores between domains, or it can refer to a split in the full scores between verbal and performance I.Q. (http://www.minddisorders.com, 2011).

The norm sample for this assessment consists of 2,200 children ages 6 years through 16 years 11 months. These children were drawn from the general population and were diverse with respect to gender, age, socioeconomic status, region, and ethnicity (pearsonassessments.com, 2011). Two hundred children were included in each age group for norming. The sample was taken within the United States using Westernized children (Wechsler, 2004).

The reliability of this assessment has been a cause for concern over time. In earlier versions, reliability was established by the intercorrelation between the sub-tests (Irwin, 1966). This continues to be true in the updated versions, but due to the differences and similarities between sub-tests in correlation studies, this is not the best measure of reliability (Irwin, 1966). The best reliability is found in the test/retest of a specific individual. As most individuals’ I.Q. scores on this scale change little over the course of their lifetimes, the scale is deemed reliable. Validity has been established in much the same way, using longitudinal studies of individuals to show consistency (pearsonassessments.com, 2011).
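
Test/retest reliability of this kind is typically summarized as the correlation between two administrations of the same scale. A minimal sketch, with invented scores (requires Python 3.10+ for statistics.correlation):

```python
# Test-retest reliability as the correlation between two administrations
# of the same scale. All scores are invented for illustration.
from statistics import correlation  # Pearson correlation (Python 3.10+)

test1 = [95, 108, 112, 87, 101, 123, 99, 105]   # first administration
test2 = [97, 110, 109, 90, 100, 121, 102, 104]  # retest, same children

print(round(correlation(test1, test2), 3))  # close to 1.0 -> reliable
```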

The three I.Q. scores are standardized with a mean score of 100 and a standard deviation of 15. Standardized means the sub-test questions have specific answers that are measurable and quantitative (Wechsler, 2004). The questions within this test are not open-ended, and there is a standard answer for each sub-test question. This is important for measuring the reliability and validity of the assessment: if the questions were open-ended, there would be no specific answer, and it would be impossible to ensure that the test measures what it is meant to measure and reliably gives a score that consistently reflects the person’s abilities (Whiston, 2009). Scores are reported with a 95% confidence interval (Wechsler, 2004). This means that for a child with a full-scale score of 102, when accounting for measurement error, there is a 95 out of 100 chance that the child’s true score falls between 97 and 107. Children whose sub-test scores are only moderately scattered have a more even profile of abilities than children whose scores are split more widely between sub-scales (Wechsler, 2004).
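
As a worked example of that arithmetic: a 95% interval is roughly the observed score plus or minus 1.96 standard errors of measurement. The SEM value below is an assumption chosen to reproduce the five-point band in the text, not a figure from the manual:

```python
# Reproducing the confidence band described above: a 95% interval is
# roughly the observed score +/- 1.96 standard errors of measurement.
# The SEM value is an illustrative assumption, not taken from the manual.

def confidence_band(observed, sem=2.55, z=1.96):
    half_width = z * sem  # ~5 I.Q. points
    return observed - half_width, observed + half_width

low, high = confidence_band(102)
print(f"full-scale 102 -> 95% band ({low:.0f}, {high:.0f})")  # (97, 107)
```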

The scoring portion of this assessment is one of the drawbacks to using it. Scoring is time-consuming because raw scores must be obtained for every sub-test; the examiner must then convert the total raw scores to scaled scores or base rates (Wechsler, 2004). This is done using the conversion tables included with the test, aligning the child’s chronological age with the correct domain and sub-scale. For instance, the block design sub-test is a timed scale, so the child is assessed taking into consideration the amount of time it took to complete the design (using a stopwatch), whether the design was correct, and the number of trials the child used (Wechsler, 2004). All of these components are elements of the score for each question. Another drawback is that many of the questions are not culturally relevant outside of the United States. For instance, in the vocabulary portion of the assessment the child is asked to interpret word meanings, and most of the words used are relevant to the context of United States or Westernized culture, making this test less valid for a child from Africa or other cultural regions of the world (Wechsler, 2004). One limitation not mentioned in any of these sources is that this assessment limits the definition of intelligence to the four domains included within it. Not considered here are other forms of intelligence often not measured in children, such as resilience, motivation, and specific skills considered giftedness in the humanities, music, and art.
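
To illustrate the kind of age-banded lookup the conversion tables perform, here is a toy version; every value in the table below is an invented placeholder, not the manual’s actual norms:

```python
# A toy version of converting a raw sub-test score to a scaled score by
# age band. All table values are invented placeholders; the real
# conversion uses the norms in the WISC-IV Integrated manual.

SCALED_TABLE = {
    # (min_age_months, max_age_months): [(raw_lo, raw_hi, scaled_score)]
    (72, 95):  [(0, 5, 4), (6, 12, 7), (13, 20, 10), (21, 30, 13)],
    (96, 131): [(0, 7, 4), (8, 14, 7), (15, 24, 10), (25, 30, 13)],
}

def to_scaled(raw, age_months):
    for (lo, hi), rows in SCALED_TABLE.items():
        if lo <= age_months <= hi:
            for r_lo, r_hi, scaled in rows:
                if r_lo <= raw <= r_hi:
                    return scaled
    raise ValueError("age or raw score outside the (toy) table")

print(to_scaled(raw=16, age_months=100))  # -> 10
```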

Overall, the Wechsler Intelligence Scale for Children, when used with other components as part of a full-scale assessment, is valid and reliable in measuring the individual intellectual abilities commonly valued in children.

References

Gardner, H. (1993). Multiple intelligences: The theory in practice. New York, NY: Basic Books. pp. 69-71.

Irwin, D. (1966). Reliability of the Wechsler Intelligence Scale for Children. Journal of Educational Measurement, 3(4), 287-292.

Retrieved on August 7th from http://www.pearsonassessments.com/HAIWEB/Cultures/en-us/Productdetail.htm?Pid=015-8979-893

Retrieved on August 7th from http://www.minddisorders.com/Py-Z/Wechsler-adult-intelligence-scale.html

Whiston, S. C. (2009). Principles and applications of assessment in counseling (3rd ed.). Belmont, CA: Brooks/Cole, Cengage Learning. pp. 67-70.

Wechsler, D. (2004). WISC-IV Integrated administration and scoring manual. San Antonio, TX: Harcourt Assessment, Inc. pp. 2-7, 29-32.

Samples of PowerPoint Presentations

My Social Change

Auditory Processing Disorder

Visual Strategies

Reflective Practices

HIPAA

COR

Respectful Parenting

These projects are the original work of this author.
