A comparison of most to least prompting, no-no prompting, and responsive prompt delay procedures

Discrete trial training is commonly used to teach children with autism spectrum disorder and related intellectual disabilities. A number of prompting and error correction strategies can be implemented when using discrete trial training; these strategies need to be both effective and efficient. We compared a novel procedure, responsive prompt delay, to most to least prompting and no-no prompting. A parallel treatments design, nested within a modified multiple probe design (Horner & Baer, 1978), was used to compare the three procedures with three participants. The responsive prompt delay procedure was at least as effective as the most to least prompting and no-no prompting procedures for all three participants; the time required for each participant to master the skills varied across procedures.


| INTRODUCTION
Discrete trial training (DTT) is commonly used as part of comprehensive educational programmes to teach young children with autism spectrum disorder (ASD) and related intellectual disabilities. DTT has been demonstrated to be effective in teaching social, language and academic skills (Fentress & Lerman, 2012; Kodak, Campbell, et al., 2016; Leaf et al., 2016; Soluaga et al., 2008). The core components of a discrete trial are the discriminative stimulus, the learner's response and the teacher-delivered consequence. These remain consistent across DTT procedures. However, different approaches to addressing errors can be adopted, including errorless learning or error correction procedures (Leaf et al., 2020).
Errorless procedures aim to eliminate or reduce the occurrence of errors during teaching by providing a controlling prompt with the natural discriminative stimulus (i.e., a 0-s response interval) on every trial (Leaf et al., 2014a, 2014b). A controlling prompt is the least assistive prompt that results in 100% accuracy when teaching novel skills. The next step is to transfer stimulus control to the natural discriminative stimulus; there are several ways to achieve prompt fading, each with advantages and disadvantages.
Most to least prompting is a type of errorless procedure that systematically fades prompts for each individual skill along a most to least hierarchy, following a set criterion of correct responses (Gast, 2011). Prompt fading requires ongoing monitoring of each response and the corresponding data; teachers must constantly use the data to make treatment decisions. If prompts are not faded systematically and in a timely manner, prompt dependency may occur (Grow & LeBlanc, 2013; Leaf et al., 2014a, 2014b; MacDuff et al., 2001); that is, the natural discriminative stimulus will not evoke the correct response (Green, 2001).
In error correction procedures, a delay follows the discriminative stimulus to allow the learner to respond independently. A prompt or error correction is implemented only if the learner emits an incorrect response or fails to respond. Independent correct responses are differentially reinforced; differentially reinforcing independent correct responses may result in an efficient transfer of stimulus control (Grow & LeBlanc, 2013). The error correction is a contingency applied to errors or non-responses; it corrects and reduces errors, and increases correct responding (Carroll, Joachim, St. Peter, & Robinson, 2015; Townley-Cochran, Leaf, Leaf, Taubman, & McEachin, 2017). Examples of error correction include vocal or corrective feedback (e.g., "no"; Leaf et al., 2016), modeling the correct response (e.g., the therapist says, "it's a car"; Leaf et al., 2020), and remedial trials. Remedial trials provide the learner with additional opportunities to respond correctly in the presence of the discriminative stimulus following an error (Worsdell et al., 2005).
Many errorless learning and error correction procedures are effective; effectiveness varies across learners and may be impacted by teaching variables, such as the type of skill being taught (Carroll, Joachim, St. Peter, & Robinson, 2015; Leaf et al., 2014a, 2014b; Turan et al., 2012). Most to least prompting has been demonstrated to be an effective tool for teaching skills such as receptive identification, matching and imitation (Fentress & Lerman, 2012), receptive labeling (Leaf et al., 2014a) and tacting (Leaf et al., 2014b). Error correction procedures, including no-no prompting, have also been demonstrated to be effective in teaching a range of skills (Leaf et al., 2010; Leaf et al., 2014a, 2014b; Leaf et al., 2016; Smith et al., 2006). Research has not found that one strategy is consistently more effective than another. A recent randomized clinical trial found most to least prompting and error correction to be almost equally effective and efficient when teaching intraverbal skills to 28 children with an ASD diagnosis (Leaf et al., 2020).
When no one procedure is consistently more effective than another, practitioners may choose between procedures based on the needs of individual clients, or they may focus on service needs. When considering the needs of individual clients, practitioners may conduct assessments to identify the most effective prompt fading procedure and/or the most effective prompt type (for example, gestural or positional) for each learner. Research suggests that the outcomes of prompt assessments will vary across participants. Schnell et al. (2020) found least to most prompting to be more efficient than most to least prompting for the three participants in their study, whereas Cengher et al. (2015) found the opposite for the three participants in theirs. Therefore, where feasible, it may be best practice to conduct individualized prompt assessments for each learner.
Teaching based on the principles of applied behavior analysis (ABA) can be conducted in a variety of settings. In many cases, early intensive behavioral intervention (EIBI) is provided by a team of fully trained practitioners who are skilled in running prompt assessments and making evidence-based decisions. While conducting prompt assessments may be feasible in such settings, it may be too time consuming or cumbersome in others, for example maintained or public schools. Teaching staff in maintained special educational needs (SEN) schools often do not have formal training in ABA; therefore, it may not be appropriate for them to conduct prompt assessments. As such, a procedure that is effective and efficient for the majority of learners, and that can be adapted for a minority, may be necessary. Behavior analysts may need to consider both the supporting environment and the needs of the learners in such settings. Staff expertise, training and other resources should influence the decision of which procedure to use, as these may impact treatment fidelity, which may be compromised in settings where staff are not formally trained in ABA (Fryling et al., 2012). Kodak et al. (2018) found that school staff make multiple errors when implementing DTT, and such errors can impact learning outcomes (Carroll et al., 2013; DiGennaro Reed et al., 2011). This highlights the need for more training in these settings. However, it also supports the idea of choosing procedures that are suited to these types of setting as well as to the needs of the learner.
In increasingly complex settings, it may be important to use a procedure that can be implemented by all staff and that is effective with the majority of children. It may be useful to use a procedure that is prescriptive; that is, one where staff do not have to make moment-to-moment decisions about the type of prompt to use or when to deliver a prompt. The development of prompt dependency is an important consideration, particularly in busy clinical settings where prompt fading may be compromised; if prompts are not systematically faded, prompt dependency can occur.
Prompt fading requires careful ongoing monitoring of the data, which may be difficult in a setting where many staff work with many pupils across the week. Therefore, a procedure that does not require prompt fading may be optimal. Foran et al. (2015) evaluated a comprehensive educational model that was underpinned by the principles of applied behavior analysis. This model was implemented in a generic SEN school for children aged 4-7 years old. Teaching staff were not formally trained in ABA, and the ratio of teaching staff to children was not one-to-one.
Each member of the classroom staff assisted in the delivery of an individualized DTT programme with up to eight children across the week. In this setting, the behavior analyst needed to consider the supporting environment when developing programmes for children across the school. Commonly implemented procedures often needed to be modified slightly; one example was the prompting procedure. Because staff were not formally trained in ABA and the ratio of staff to children was not one-to-one, it was not appropriate for staff to make ongoing data-based decisions, such as when to fade prompts during DTT sessions. Instead, a DTT procedure that was prescriptive and remained consistent across learners and skills was needed, the assumption being that this would reduce the likelihood of treatment errors, which have been shown to impact child outcomes. The behavior analyst in this setting designed a prompting procedure that met these requirements, known as the responsive prompt delay procedure.
The protocol for the responsive prompt delay procedure is as follows: the teacher delivered the discriminative stimulus and provided an opportunity for the learner to respond independently; independent correct responses were differentially reinforced. If the learner did not respond within 5-s, the discriminative stimulus was presented again. If the learner again did not respond, a prompt (least to most) was provided. If the learner responded incorrectly at any time, an error correction using the most assistive prompt and instructional feedback was delivered. Two remedial trials followed prompted and incorrect responses. Implementing least to most prompting provided the learner with the opportunity to respond with the least assistive prompt; a more assistive prompt was provided only if the learner did not respond. The learner did not have the opportunity to make continuous errors on a trial because the most assistive prompt was delivered immediately following any error. The procedure is consistent across learners, skills and settings; furthermore, it requires neither prompt fading nor the identification of a controlling prompt. This study compared the responsive prompt delay procedure to two well-documented prompting strategies: most to least prompting and no-no prompting. Three children took part in the study. It was hypothesized that the responsive prompt delay procedure would be at least as effective as the other two procedures.
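For clarity, the per-trial decision logic described above can be sketched as follows (an illustrative model, not code from the study; the function names and the simulated learner are hypothetical assumptions):

```python
# Illustrative sketch of one responsive-prompt-delay trial (hypothetical
# model of the protocol above). `get_response(prompt)` stands in for
# observing the learner within the 5-s interval; it returns "correct",
# "incorrect", or "none".

def run_trial(get_response, prompt_hierarchy):
    """Return the sequence of teacher actions for a single trial."""
    error_correction = ["error correction (most assistive prompt + feedback)",
                        "remedial trial 1", "remedial trial 2"]
    actions = ["present SD"]
    for attempt in range(2):            # independent opportunity, then re-presented SD
        r = get_response(None)
        if r == "correct":
            return actions + ["differential reinforcement"]
        if r == "incorrect":
            return actions + error_correction
        if attempt == 0:
            actions.append("re-present SD")
    for prompt in prompt_hierarchy:     # least to most assistive prompting
        actions.append(f"prompt: {prompt}")
        r = get_response(prompt)
        if r == "correct":
            break
        if r == "incorrect":            # any error gets the most assistive prompt
            actions.append("error correction (most assistive prompt + feedback)")
            break
    return actions + ["remedial trial 1", "remedial trial 2"]
```

Under this sketch, every trial ends in either differential reinforcement (independent correct response) or a prompted/corrected response followed by two remedial trials, which is what makes the procedure prescriptive: no mid-session decisions about prompt fading are required.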
A parallel treatments design (PTD; Gast & Wolery, 1988) nested within a modified multiple probe design was used to compare the three procedures. The PTD was devised by Gast and Wolery (1988) to compare instructional practices such as those compared in this study. The design allowed the three procedures to be compared simultaneously using concurrently operating multiple probe designs for each procedure (Gast & Wolery, 1988). Experimental control was demonstrated by showing that changes in the dependent variables occurred only after the independent variable was introduced. The modified multiple probe element of the design allowed this to be demonstrated across skills for each participant in a time-lagged fashion.

| Participants
Participants were recruited from primary school classrooms in a maintained special needs school. All participants received an education based on the principles of ABA, which included discrete trial training.
Thomas was 7 years old and had a diagnosis of ASD. Thomas communicated using short sentences to mand and tact items; he did not initiate conversations, but could respond to basic intraverbals. Thomas engaged in low-level problem behaviors, for example, knocking items over. Olivia was 8 years old and had a diagnosis of social communication disorder. Olivia could mand and tact using 2-3 word sentences, and initiated and engaged in conversation with adults and peers. Olivia did not engage in any problem behavior at the time of the study. Mark was 6 years old and had a diagnosis of ASD. Mark could also mand and tact using 2-3 word sentences and engaged in basic conversation when initiated by adults. Mark sometimes engaged in protest vocalizations. All three participants had generalized imitation and matching skills and could follow two to three word instructions from adults.

| Setting
Sessions took place in a corridor close to the participants' classroom. The participant sat opposite the researcher at a table. Students who were completing their MSc in ABA implemented the majority of teaching sessions; the first author implemented the remaining sessions and supervised the students.
The implementation of the independent variables was counterbalanced across instructors, settings and time of day. On average, two teaching sessions and two daily probes were conducted each day; full probes were usually completed within one day. On average, teaching took place three days per week.

| Preference assessment
Interviews were used to identify preferred items for the first two participants prior to intervention. The researchers interviewed teaching staff and behavior analysts, and eight preferred items were identified; these items were depicted on a choice board. A multiple stimulus without replacement (MSWO; DeLeon & Iwata, 1996) preference assessment was conducted with the third participant. Teaching staff suggested up to 15 items, which were presented in the MSWO. The items were placed in front of the child and, after the child interacted with an item, it was removed from the array; the array was then reorganized by moving each item to the right. This was repeated four times, and the eight items that were chosen most frequently during the assessments were depicted on a choice board. Choice boards were presented to the participants before teaching and probe sessions; the item that was chosen was delivered at the end of that session.
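One round of the selection-and-rearrangement mechanics described above could be sketched as follows (an illustrative reading of "moving each item to the right" as a rightward rotation; the function name and this interpretation are assumptions, not the authors' exact protocol):

```python
# Sketch of one MSWO selection round (assumed mechanics): the chosen item
# is removed, and the remaining items each shift one position to the right,
# with the last item wrapping around to the front of the array.

def mswo_round(array, chosen):
    remaining = [item for item in array if item != chosen]
    return remaining[-1:] + remaining[:-1]   # rotate right by one position
```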

| Skills taught
Participants were taught a range of skills (see Table 1). The types of skills varied across children. The skill sets for Thomas and Olivia were tacting country names, auditory visual conditional discrimination (i.e., receptive identification) of items named in Welsh, and matching digital clocks to analog clocks. The skill sets for Mark were tacting animal names, matching Welsh labels to pictures of items, and auditory visual discrimination of items named in Welsh. The skills were not in the participants' repertoires and were not the target of any educational intervention prior to or during the study. The skills for each child were randomly assigned to one of three conditions using a random number generator. The researcher selected targets for each skill set that appeared to be equal in difficulty; these were then counterbalanced across teaching conditions in order to control for minor variations in the level of difficulty (Wolery et al., 2018). The field size for all teaching and probe sessions was an array of three.

| Controlling prompt assessment
A controlling prompt assessment was used to identify the controlling prompt; the controlling prompt was the least intrusive prompt that evoked correct responding with 100% accuracy on novel skills. Each participant's level of accuracy was assessed across four different prompt levels (positional prompt, model prompt, gestural prompt, and full physical prompt) for unknown skills. During the assessment, three index cards with words written in French, Spanish or Welsh were placed in front of the participant. The instructor presented the discriminative stimulus and provided a prompt simultaneously. Neutral feedback was provided for any selection. Each prompt level was tested four times; the least intrusive prompt was tested first and increasingly intrusive prompts were tested until the controlling prompt was identified. Thomas' controlling prompt was positional for matching clocks and receptive identification of items named in Welsh. Olivia's controlling prompt was partial physical for matching clocks and receptive identification of items named in Welsh. The controlling prompt for tacting countries was a full echoic prompt for Olivia and Thomas; Mark's controlling prompt was a model prompt for all three skill sets. From this, a prompt hierarchy was developed; there were two to three prompts in each hierarchy, starting with the controlling prompt and descending to less intrusive prompts. The controlling prompt was used during the first teaching session for new skills taught using the most to least prompting procedure. The intrusiveness of the prompt decreased or increased following a specified number of correct or incorrect responses (see most to least prompting).

TABLE 1 Skills taught using most to least prompting, no-no prompting and responsive prompt delay

| Response measurement
The primary dependent variable was the percentage of correct responses during daily probe and full probe conditions. Correct responding was defined as the participant emitting the correct response within 5-s of the discriminative stimulus being presented. A skill was considered mastered when performance on a set of stimuli reached 91.6% (11 correct responses out of a total of 12) across three consecutive daily probe sessions. Once a participant met the mastery criterion for a stimulus set, teaching on that set stopped and daily probes were no longer carried out for that skill set.
The second dependent variable was the total number of teaching trials and teaching sessions required to master skill sets across the three conditions. Average session duration is also reported for each condition: a teaching session started when the first discriminative stimulus was delivered and finished when the learner responded to the final discriminative stimulus.

| Schedule of reinforcement
Teaching sessions: Praise was delivered on a fixed ratio (FR) 1 schedule contingent on correct responding during all teaching sessions. Praise had previously been conditioned as a reinforcer for all three participants. A token economy system was used; each token board had six tokens. All three participants had a history of responding with token delivery during their DTT sessions. Tokens were delivered on an FR 3 schedule contingent on correct responding (i.e., prompted correct responses during the most to least prompting condition, and independent correct responses during the no-no prompting and responsive prompt delay conditions) during teaching sessions.
This schedule was used because it was similar to the schedule employed for token delivery during the participants' typical DTT sessions. Access to backup reinforcers was delivered after the sixth token; this signaled the end of the work session.
Full probe and daily probe sessions: Neither reinforcement nor corrective feedback was delivered contingent on responses during the full probe and daily probe sessions. Instead, praise was delivered on a variable interval 30-s (VI 30-s) schedule for compliance (sitting appropriately, having hands on knees or table). Access to tangibles was delivered after every 12 trials in the daily probe sessions and after 18 trials in the full probe sessions.

| Full probe sessions
Full probe sessions were conducted throughout the study. Full probes included all of the skill sets; that is, mastered skill sets, skill sets currently in acquisition and untaught skill sets. Three full probe sessions were conducted prior to teaching to determine baseline levels of performance. One full probe session was conducted following mastery of any skill set. This was used to ensure that performance on that skill set and previously mastered sets had maintained (remained at or above 91.6%), and that performance on untaught skill sets had not improved before teaching started. Each skill was probed four times during each full probe session. All targets were interspersed during full probe sessions; that is, they were not grouped in their skill sets.
The field size for all probe sessions was an array of three stimuli. During the full probe condition, the teacher presented the discriminative stimulus and allowed the participant 5-s to respond. Prompting and error correction procedures were not implemented during full probe sessions.

| Daily probes sessions
Daily probes (DP) were carried out before each teaching session to assess the acquisition of skill sets currently being taught; DP were used as a dependent measure to evaluate the effectiveness of each teaching method (Gast & Wolery, 1988). Each skill was probed four times during DP sessions; therefore, each daily probe consisted of 12 probe trials. DP were conducted in the same manner as full probes.

| Prompting conditions
Most to least prompting. In the most to least prompting condition, the discriminative stimulus and a controlling prompt were provided simultaneously. Praise, tokens, and access to tangibles were provided for correct responding as appropriate (see schedule of reinforcement). If the participant responded incorrectly, the stimuli were removed and the next trial was presented. The controlling prompt was used when introducing new stimuli, and on subsequent sessions the prompt that was used in the previous teaching session was repeated. The intrusiveness of the prompt was decreased following three consecutive correct prompted responses; the intrusiveness of the prompt was increased following an incorrect response (Leaf et al., 2014a, 2014b).
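The fading rule just described amounts to a simple state update (an illustrative sketch; the function and its state representation are hypothetical, with level 0 taken as the least intrusive prompt):

```python
# State update for the most-to-least fading rule described above:
# fade one level after three consecutive correct prompted responses;
# move up one level after an incorrect response.

def next_prompt_level(level, streak, correct, max_level):
    """Return (new prompt level, new consecutive-correct streak)."""
    if not correct:
        return min(level + 1, max_level), 0   # increase intrusiveness, reset streak
    if streak + 1 >= 3:
        return max(level - 1, 0), 0           # fade after three consecutive corrects
    return level, streak + 1
```

This makes explicit why the procedure demands ongoing monitoring: the prompt level for the next session depends on the running streak of correct prompted responses.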
Responsive prompt delay procedure. In the responsive prompt delay condition, the teacher presented the discriminative stimulus and allowed 5-s for the participant to respond. If the participant responded correctly, the researcher provided praise and access to tangibles as above, and all stimuli were removed from the table. If the child did not respond, the stimuli were removed momentarily and neither feedback nor reinforcement was provided. In a subsequent trial, the same instruction and stimuli were used. If the child again did not respond, least to most prompting was implemented: the instruction was presented again, the least intrusive prompt was provided initially and, if necessary, increasingly intrusive prompts were provided until the child responded correctly. The instruction was not repeated as the intrusiveness of the prompt increased (see Table 2 for examples of each prompt level for motor and vocal responses). The first remedial trial was then presented: stimuli were positioned in the same way, the instruction was presented and the child was given 5-s to respond. During a second remedial trial the stimuli were rotated and the instruction was changed (if possible). A new trial was then introduced.
If the participant responded incorrectly at any stage, a 3-step error correction was implemented. First, the instruction was presented again and the most intrusive prompt was used. The stimulus related to the correct response was then isolated (held up) and a suitable model prompt was provided, for example, "it's a bear". This was followed by two remedial trials as above. When remedial trials resulted in extra teaching trials during the responsive prompt delay procedure, additional trials were added to the subsequent teaching sessions for the most to least and no-no prompting conditions to ensure that an equal number of individual trials were provided across conditions.
No-no prompting. The no-no prompting procedure was similar to that used by Leaf, Sheldon and Sherman (2010).
The teacher presented the discriminative stimulus and allowed 5-s for the participant to respond. The consequence for correct responses was identical to the responsive prompt delay procedure. If the child did not respond or responded incorrectly, the researcher said 'no' in a neutral voice and removed the stimuli; the same trial was then repeated. If the participant responded incorrectly or did not respond a second time, the researcher said 'no' again and removed the stimuli. Finally, the trial was repeated and a controlling prompt was provided simultaneously.

| Additional procedures
Correct responding did not increase on the matching clocks stimulus sets after a number of sessions for Olivia and Thomas. Therefore, an additional prompt was introduced, whereby participants were asked to tact each digital clock prior to matching it to the analog clock.

| Interobserver agreement and treatment fidelity
The instructor recorded participant responding during full probes, daily probes and teaching sessions; an independent observer simultaneously recorded participant responding during 29.1% of full probes and 26.7% of daily probes. Interobserver agreement (IOA) was assessed by comparing the observers' data on a trial-by-trial basis; an agreement was defined as the two observers recording the same outcome on a corresponding trial (i.e., either a correct or incorrect response during full and daily probes). The number of agreements was totaled, divided by the number of agreements plus disagreements, and converted to a percentage. IOA was 98.8% for full probes (range, 95.5% to 98.6%) and 100% for daily probes across participants.
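The trial-by-trial agreement calculation described above amounts to the following (a minimal sketch; the function name and response coding are hypothetical):

```python
# Total-agreement IOA: agreements / (agreements + disagreements) * 100,
# where each list element is one observer's record for a trial
# (e.g., "correct" or "incorrect").

def trial_by_trial_ioa(observer_1, observer_2):
    assert len(observer_1) == len(observer_2), "observers must score the same trials"
    agreements = sum(a == b for a, b in zip(observer_1, observer_2))
    return 100 * agreements / len(observer_1)
```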
To assess treatment fidelity, teaching sessions were recorded and analyzed. The procedure for each of the prompting conditions was outlined in a treatment fidelity checklist. An observer reviewed teaching sessions using these checklists; teacher behavior was scored as correct or incorrect for each of the steps outlined. Treatment fidelity was scored for 22.7% of most to least prompting sessions (range, 17.7% to 27%), 25.4% of no-no prompting sessions (range, 20% to 33%) and 22.3% of responsive prompt delay sessions (range, 12.1% to 36%). The number of correctly implemented steps was totaled, divided by the total number of steps, and converted to a percentage.

TABLE 2 Least to most prompting system for the prompt delay procedure

Skills requiring a motor response
Positional: moving the card closer to the participant
Partial physical: from the participant's elbow, gently guiding their hand towards the correct card
Full physical: taking the participant's hand and placing it on the correct card

Skills requiring a vocal response
First phoneme: "G"
First syllable: "Gre"
Full word: "Grenada"

| DISCUSSION

The current study found that the responsive prompt delay procedure was at least as effective as the most to least prompting and no-no prompting procedures. Responsive prompt delay was more effective for Thomas; however, the average session duration was longer for this procedure, making it less efficient. The no-no prompting procedure resulted in the slowest acquisition of skills for Thomas. Olivia mastered the skill sets taught using no-no prompting in fewer sessions and less instructional time than with the other procedures. Mark mastered the skill sets taught using the no-no prompting and responsive prompt delay procedures in the same number of sessions, but the average session duration was slightly longer for the responsive prompt delay procedure. The differences between the three procedures were small for each participant; such differences may be negligible when teaching a range of skills as part of a comprehensive educational programme. When all procedures are equally effective when run with high fidelity, behavior analysts may wish to consider the suitability of procedures to the supporting environment.
The outcome of this research is consistent with other research that found the effectiveness of prompting procedures to be idiosyncratic across learners (Carroll et al., 2015; Leaf et al., 2014a, 2014b; Leaf et al., 2020; Turan et al., 2012). In a recent randomized clinical trial, Leaf et al. (2020) compared the effectiveness of most to least prompting and an error correction procedure that used error statements when teaching intraverbals to 28 participants with ASD. Similar to the findings of this study, participants mastered all skills using both procedures and there was very little difference in the time needed to master skills across the conditions. However, Leaf et al. found that learners in the most to least prompting group scored significantly higher on post-treatment probes.
Similarly, in the current research, the procedures that used an error correction and differential reinforcement for independent correct responses were marginally more effective than most to least prompting across participants.
These results also support other published studies that found procedures utilizing an error correction to be slightly more effective than errorless or near-errorless methods (e.g., Fentress & Lerman, 2012; Leaf et al., 2010; Leaf et al., 2016). Leaf et al. (2016) found an error correction procedure that utilized feedback and remedial trials to be more effective than most to least prompting. One major difference between the current study and Leaf et al. (2016) is that maintenance, assessed in the full probe sessions, was high for all three procedures, whereas maintenance was variable across procedures in Leaf and colleagues' study. Leaf et al. (2016) suggested that corrective feedback may make reinforcement more differential, and that it may also serve as a mild punisher. It is possible that the corrective feedback "no" in the no-no prompting procedure functioned as a positive punisher for Olivia's incorrect responses. Olivia mastered all three stimulus sets taught using this procedure in fewer sessions, and she immediately selected another stimulus when "no" was delivered contingent on non-responses or incorrect responses during teaching sessions. While problem behavior was not measured in this study, we can anecdotally report that problem behavior was not higher during the no-no prompting or responsive prompt delay procedures. Leaf et al. (2020) included a measure of problem behavior and reported that problem behavior was not higher in the error correction procedure, despite previous recommendations to avoid error correction for this reason; they therefore recommended that error correction need not be avoided on these grounds.

FIGURE Percentage of correct responses for Mark during full probe (FP) and daily probe (DP) sessions for skill set 1 (tacting animals), skill set 2 (auditory visual discrimination of items named in Welsh) and skill set 3 (matching Welsh words to pictures)

TABLE 3 Efficiency data: the total number of sessions and trials to mastery, and the average duration of teaching sessions for the three prompting procedures

The use of differential reinforcement during skill acquisition programmes has been recommended (Grow & LeBlanc, 2013). However, results from comparisons of differential and non-differential reinforcement procedures have again demonstrated that the effectiveness of these procedures may be idiosyncratic across learners (Boudreau et al., 2015; Fiske et al., 2014; Leaf et al., 2020). Both the responsive prompt delay and no-no prompting procedures were marginally more effective for the three participants in this study; independent correct responses were differentially reinforced under both procedures. Given that both differential and non-differential procedures can be effective, a combination of differential reinforcement and least to most prompting may be optimal in a setting where systematic prompt fading is not feasible. Failure to fade prompts systematically may result in prompt dependency for some learners (Grow & LeBlanc, 2013; Leaf et al., 2014a, 2014b; MacDuff et al., 2001); this must be considered when choosing a prompting procedure.
The responsive prompt delay procedure was effective for all three participants; however, it was the least efficient for two of the three. The additional time needed to implement the procedure can be explained by the time allowed for the learner to respond independently, and by the use of remedial trials following prompted and incorrect responses.
Previous research has demonstrated that providing more information about correct responses (e.g., instructional feedback) and/or additional opportunities to respond in the presence of the discriminative stimulus (e.g., remedial trials) following an error results in quicker acquisition of skills (Ardoin et al., 2009). The findings of the current study do not strongly support this. During the error correction in the responsive prompt delay procedure, the learner was provided with instructional feedback and two remedial trials following an incorrect response.
Conversely, in the no-no prompting condition, corrective feedback ("no") was delivered following an incorrect response, and the learner was not provided with further opportunities to respond in the presence of the discriminative stimulus. Despite this, the responsive prompt delay procedure was not consistently more efficient than the no-no prompting procedure. Future research could compare the responsive prompt delay procedure used in the current study to a variation that implements one or no remedial trials following the error correction.
All of the procedures included in this study incorporate a number of different prompt types (full physical, partial physical, positional and gestural prompts), and the order in which they are delivered varies depending on the procedure. As individual prompt types were not evaluated separately, we cannot draw conclusions about which prompt type would be most effective for an individual participant or skill. Prompt delay procedures, such as progressive prompt delay, on the other hand, allow specific types of prompt to be evaluated (Markham et al., 2020). However, the aim of this study was to compare the responsive prompt delay procedure to procedures that were commonly used in the UK at the time of the study; therefore, for the purposes of this research, evaluating prompting procedures that incorporated different types of prompt was not an issue. Perhaps more relevant is that the responsive prompt delay procedure incorporates least to most prompting, error correction and remedial trials, and we do not know which of these, if any, was most effective in increasing skill acquisition. As reducing the number of components of this procedure would be potentially beneficial in some settings, future research should include a component analysis to identify its most effective components.
Treatment integrity can impact the effectiveness of behavior analytic programmes (Carroll et al., 2013; Grow et al., 2009). There is a correlation between treatment integrity and treatment effectiveness in DTT; reduced levels of treatment integrity during DTT implementation can decrease the effectiveness of the intervention (Arkoosh et al., 2007; Carroll et al., 2013; Di Gennaro Reed et al., 2011). As mentioned previously, treatment integrity for DTT may be lower when it is implemented by teaching staff (Carroll et al., 2013). Therefore, the behavior analyst must consider the complexity of a prompting procedure and identify whether it can be implemented with high treatment integrity by the staff in a particular setting. The procedure used in a particular setting must be suited to the staff skill set.
All three procedures were implemented by MSc-level behavior analysts in this study, and treatment integrity was high for all three procedures. The research was conducted by MSc-level behavior analysts for a number of reasons. First, because this was a busy special educational needs school, it would not have been possible for teaching staff to take the time necessary to run the teaching sessions for research purposes. Second, while it was possible to take the time to train MSc students to deliver all three procedures, it would not have been possible to do so with teaching staff. Finally, as this was the first evaluation of the responsive prompt delay procedure, the researchers sought to conduct the research themselves. Future research should, however, be conducted with implementers who do not have formal training in ABA (e.g., those working in special educational needs settings). This research should include measures of treatment integrity to identify whether there are differences across procedures when implementers do not have formal training in ABA. Future research should also incorporate a social validity measure, focusing on ease of implementation, to assess implementers' preferences for the different procedures.
This study compared the procedures across a number of skills (i.e., tacting, matching and receptive identification). Each skill required a different response topography and required participants to attend to different discriminative stimuli. All three procedures were effective in teaching all of the skills with the three participants.
There are a number of limitations to this study. The three participants had prior exposure to the responsive prompt delay procedure. Research has shown that proximal exposure to a prompting or error correction strategy may influence relative acquisition rates, thus affecting the outcomes of comparisons with other procedures (Coon & Miguel, 2012; Kodak et al., 2016). Kay et al. (2020) and Roncati, Souva, and Miguel (2019) found that proximal exposure to specific prompts resulted in faster acquisition of intraverbal skills. Therefore, future research should investigate the efficacy of the responsive prompt delay procedure with learners who have not been exposed to it.
That said, the effect of the independent variable on the dependent variable was almost immediate across all three conditions; thus, proximal exposure to the responsive prompt delay procedure may not have impacted these participants' responding.
Another limitation is that the three participants in this study had relatively advanced learner behavior; therefore, future research should evaluate the relative effects of the three prompting procedures with learners with beginner repertoires, such as learners who do not have generalized imitation or matching skills and those with less advanced vocal verbal behavior.
Finally, in order to ensure that the number of trials was equal across teaching sessions, the researcher yoked remedial trials in the responsive prompt delay procedure and added an equal number of trials to the other two procedures when necessary. As a result, the number of trials in the most to least prompting and no-no prompting teaching sessions may have been artificially inflated. However, the inclusion of trials to criterion as a secondary dependent variable helps to mitigate this issue.
In summary, the results of this study support previous findings that many prompting and error correction procedures can be effective in teaching skills to learners with ASD and related developmental disorders. The responsive prompt delay procedure was as effective as most to least prompting and no-no prompting. This procedure incorporates some of the advantageous components of the other procedures and eliminates some of their limitations. Errors are minimized, but learners are provided with the opportunity to respond independently, thus reducing the likelihood of prompt dependency for some learners.
Corrective feedback ("no") is not incorporated; thus, positive punishment is not intentionally used. Instead, differential reinforcement, instructional feedback and further opportunities to respond in the presence of the discriminative stimulus are provided. The procedure may be easier to implement because there is a clear protocol for delivering prompts that remains consistent across trials; however, further research is needed to confirm this. Ease of implementation is an important consideration, particularly when a large number of individuals who are not formally trained in ABA are implementing procedures (Leaf et al., 2016). There may be a higher chance that treatment fidelity will be compromised with procedures that are difficult to implement; therefore, behavior analysts need to be responsive not only to the needs of learners, but also to individual settings and to those implementing the procedures.

FIGURE 1 Percentage of correct responses for Thomas during full probe (FP) and daily probe (DP) sessions for skill set 1 (tacting countries), skill set 2 (matching Welsh words) and skill set 3 (matching clocks)

FIGURE 2 Percentage of correct responses for Olivia during full probe (FP) and daily probe (DP) sessions for skill set 1 (tacting countries), skill set 2 (matching clocks) and skill set 3 (matching Welsh words)