Overcoming Small Sample Barriers in Assistive Technology Acceptance Research: A Two-Phase Approach Using Correlation-Preserved Synthetic Data Generation to Test the Technology Acceptance Model

0
65

Abstract

Assistive technology holds transformative potential for individuals with disabilities, yet paradoxically, abandonment rates reach 30-50% within the first year. Understanding why people adopt or reject these critical technologies has been hindered by a fundamental research challenge: the populations who use assistive technology are too small and difficult to access for traditional large-scale studies. This creates a methodological impasse where the tools I need to study adoption require sample sizes I cannot obtain. This study breaks this impasse through methodological innovation. I collected detailed empirical data from assistive technology stakeholders in Seoul, South Korea, then used advanced statistical techniques to generate synthetic participants that preserve the psychological relationships observed in real data. This two-phase approach enables rigorous hypothesis testing while respecting the constraints of specialized populations. My findings challenge fundamental assumptions about technology adoption. The Technology Acceptance Model, validated extensively in consumer contexts, operates differently in assistive technology settings. Devices perceived as “too easy to use” were actually viewed negatively, implying users interpreted simplicity as limited capability or stigmatizing design rather than as a benefit. Social influence from healthcare providers, family members, and peers proved more powerful than the device’s perceived usefulness in driving adoption decisions. Most surprisingly, despite dramatic differences in technical confidence between professionals, users, and guardians, all groups showed similarly strong adoption intentions, suggesting multiple psychological pathways converge on the same outcome. These patterns suggest that assistive technology adoption follows fundamentally different logic than consumer technology. Rather than optimizing for simplicity and ease of use, successful AT implementation may require demonstrating sophisticated capability while providing robust support systems. The research also establishes that correlation-preserved synthetic data generation can enable statistically powered research in populations where traditional large-scale sampling proves impossible, offering a methodological template for disability and rehabilitation research more broadly.

Introduction

Background and Context

Assistive technology (AT) represents a critical bridge between disability and independence, encompassing any device, software, or equipment that helps individuals with disabilities perform functions that might otherwise be difficult or impossible. Despite the transformative potential of these technologies, global AT adoption remains paradoxically low, with abandonment rates ranging from 30% to 50% within the first year of acquisition1. This disconnect between technological availability and actual utilization presents a fundamental challenge to disability support systems worldwide. The World Health Organization estimates that over one billion people globally require assistive products, yet only 10% have access to them2, highlighting a critical gap in both provision and adoption.

The Technology Acceptance Model (TAM), originally developed by Davis3, has emerged as the dominant theoretical framework for understanding technology adoption across diverse contexts. TAM posits that technology adoption is primarily determined by two key perceptions: perceived usefulness (PU) and perceived ease of use (PEOU), which together influence behavioral intentions and subsequent actual use. Meta-analyses have demonstrated TAM’s robustness across consumer technologies, typically explaining 40-60% of variance in behavioral intention4. However, the application of TAM to assistive technology contexts remains understudied, with existing research suggesting that AT adoption may follow fundamentally different psychological pathways than consumer technology adoption5,6.

Problem Statement and Rationale

Assistive technology acceptance research faces a methodological crisis that threatens the validity and generalizability of findings. Unlike consumer technology studies that readily access thousands of participants through online panels, AT research confronts systematic barriers that make traditional large-scale data collection impractical or impossible7,8. These barriers manifest across multiple dimensions: limited population sizes, with specific disability types representing small percentages of the general population9, access barriers, as participants face mobility, communication, or cognitive challenges that complicate traditional survey methods10 and caregiver burden, with family members and healthcare providers overwhelmed by caregiving responsibilities, limiting research participation11.

The statistical implications of these constraints are severe. Traditional power analyses indicate that testing TAM with multiple predictors requires minimum sample sizes of 100-150 participants12,13, yet recent systematic reviews document that most AT studies achieve far smaller samples. Westland14 found that 80% of technology acceptance studies in top journals had insufficient sample sizes, averaging only 50% of the minimum needed for adequate statistical power. This widespread underpowering compromises the ability to detect true effects, validate theoretical models, or develop evidence-based interventions for AT provision.

The COVID-19 pandemic has further exacerbated these challenges, with multiple studies reporting substantial recruitment disruptions and attrition15,16. Contemporary AT research increasingly relies on pilot studies and exploratory designs rather than rigorous hypothesis testing17,18, creating a growing gap between theoretical development and empirical validation.

Significance and Purpose

This research addresses the endemic small sample problem in AT acceptance research through methodological innovation, demonstrating how synthetic data generation can enable statistically powered analyses while preserving the empirical grounding of real-world data. The significance of this contribution operates at three levels.

Methodologically, this study pioneers the application of correlation-preserved synthetic data generation to behavioral research, demonstrating that maintaining relationship structures between variables is as critical as matching distributional parameters19. This advances beyond simple parametric matching approaches that have shown limited success20, providing a template for rigorous synthetic data generation in fields where large-scale data collection faces structural barriers.

Theoretically, by achieving adequate statistical power through synthetic expansion, this research enables the first comprehensive test of TAM in assistive technology contexts with sufficient power to detect nuanced relationships and interaction effects. This addresses a critical gap identified by Nam et al.21 and extends TAM theory by examining whether its fundamental assumptions hold in contexts where adoption is driven by functional need rather than voluntary choice.

Practically, findings from this research can inform evidence-based strategies for AT implementation, potentially reducing the 30-50% abandonment rates that represent both economic waste and human suffering. Understanding the distinct psychological pathways through which different stakeholder groups (users, guardians, professionals) approach AT adoption enables targeted interventions that address the specific barriers and facilitators relevant to each group.

Objectives

This research pursues four interconnected objectives:

  1. To develop and validate a two-phase methodology that combines empirical pilot data collection with correlation-preserved synthetic data generation, demonstrating its application to assistive technology acceptance research where traditional large-scale sampling proves infeasible.
  2. To test whether the Technology Acceptance Model, validated extensively in consumer technology contexts, applies to assistive technology adoption and to identify any fundamental differences in the relationships between constructs when adoption is need-driven rather than voluntary.
  3. To examine how different stakeholder personas (AT users, guardians, and professionals) differ in their pathways to adoption decisions, testing whether uniform implementation strategies are appropriate or whether persona-specific approaches are needed.
  4. To establish methodological guidelines for using synthetic data generation in behavioral research, including validation frameworks, quality metrics, and best practices for preserving empirical grounding while achieving statistical power.

Scope and Limitations

This study focuses specifically on assistive technology acceptance in the South Korean context, with data collection conducted in Seoul during September-October 2024. The scope encompasses various forms of assistive technology but does not differentiate between specific device types or disability categories. The research examines adoption intentions rather than actual adoption behavior, acknowledging the potential intention-behavior gap documented in technology acceptance literature.

Key limitations include the small pilot sample (n=95) from which parameters were extracted, potentially affecting the stability of correlation estimates; the geographic and cultural specificity of the Korean sample, which may limit generalizability to Western contexts where individualistic values might alter social influence effects; and the cross-sectional design, which precludes causal inference and cannot capture the dynamic nature of technology acceptance over time. Additionally, while the Gaussian copula method preserves linear correlations (verified through Pearson-Spearman comparison showing differences <0.025), it may not fully capture non-linear relationships or higher-order interactions present in the real data7.The synthetic data generation approach, while addressing sample size constraints, cannot introduce variability not present in the pilot sample and assumes that relationships observed in the small sample generalize to the larger population. These limitations are explicitly acknowledged and addressed through sensitivity analyses and validation against theoretical expectations from literature.

Theoretical Framework

This research employs the Technology Acceptance Model (TAM) as its primary theoretical lens, extended with constructs particularly relevant to assistive technology contexts. Core TAM constructs of perceived usefulness and perceived ease of use are supplemented with variables identified through systematic literature review as critical to AT adoption: technical competence8, social influence5, technology anxiety9, trust and safety concerns10, and economic factors including price value and support conditions11,12.

The theoretical framework also incorporates insights from Innovation Diffusion Theory, particularly concepts of relative advantage, compatibility, and trialability13, recognizing that AT adoption may involve considerations beyond those captured in traditional TAM applications. This extended framework enables examination of whether consumer technology models adequately explain AT adoption or whether fundamental theoretical modifications are needed for contexts where technology addresses disability-related functional limitations.

Methodology Overview

This research employs an innovative two-phase sequential design14 that addresses the endemic small sample problem in AT research. Phase 1 consisted of empirical data collection from 95 participants (65 professionals, 13 AT users, 17 guardians) through online surveys, with the explicit purpose of extracting statistical parameters, correlation structures, and qualitative insights rather than hypothesis testing. Phase 2 employed Gaussian copula methods15,16 to generate a synthetic dataset of 500 participants that preserved the correlation structures observed in the real data while achieving the statistical power necessary for robust hypothesis testing.

This approach represents a methodological innovation that maintains empirical grounding through real-world data collection while overcoming sample size constraints through principled synthetic expansion. The correlation preservation ensures that relationships between variables, the theoretical heart of TAM, remain intact, enabling valid inference despite the synthetic nature of the expanded dataset. Validation procedures confirmed alignment with both empirical parameters from Phase 1 and theoretical expectations from literature, providing confidence in the synthetic data’s validity for hypothesis testing.

Methods

Theoretical Framework

Technology Acceptance Model (TAM)

This study employed the Technology Acceptance Model (TAM) as its primary theoretical framework. Originally developed by Davis3, TAM posits that technology adoption is primarily determined by two key perceptions: Perceived Usefulness (PU), defined as the degree to which an individual believes that using a particular system would enhance their job performance, and Perceived Ease of Use (PEOU), defined as the degree to which an individual believes that using a particular system would be free from effort. These core constructs influence users’ behavioral intentions, which subsequently predict actual system use4.

Variable Selection Based on Literature

The selection and operationalization of variables was grounded in both TAM theory and assistive technology literature. Table 1 presents the theoretical foundation for each construct included in my model.

Construct CategoryVariableSupporting LiteratureExpected DirectionRationale
Core TAMPerceived Usefulness


Perceived Ease of Use
Huang17; Venkatesh et al.18


Huang17, van der Heijden et
al10, Dawe1

Positive (+)




Positive (+)
Primary driver of intention in TAM


Reduces cognitive burden; 35% AT abandonment due to complexity
Extended TAMTechnical Self-Efficacy



Anxiety




Social Influence



Trust/Safety
Iancu & Iancu8, Lee & Tem-
pleton19


Iancu & Iancu8, Huang17


Huang17, Li et al.5


van der Heijden et al10
Positive (+)





Negative (-)




Mixed




Positive (+)
Confidence in ability predicts technology use





Technology fears reduce adoption, especially in elderly/disabled


Support
encourages use; stigma discourages (context-dependent)

Risk perceptions affect
Innovation DiffusionRelative Advantage

Compatibility

Trialability

Customizability

Spinelli et al.6

Dawe1, Spinelli et al.6

Huang17

Spinelli et al.6
Positive (+)


Positive (+)


Positive (+)


Positive (+)

Benefits over current solutions drive adoption


Fit with lifestyle critical for sustained use

Reduces uncertainty through experience

Personalization addresses identity concerns
Economic FactorsPrice Value

Support Conditions

Cruz & Emmel11, Huang17

Lee & Templeton19, de
Witte et al.2
Positive (+)


Positive (+)
Perceived financial control significant (β≈0.33)


Training and support critical for AT success
Table 1 | Variable-Literature Mapping and Expected Relationships

This literature-grounded approach ensured that my synthetic data generation parameters reflected both empirical observations from Phase 1 and theoretical expectations from prior research.

Research Design

The Challenge of AT Research

Assistive technology research faces unique and substantial barriers to large-scale data collection that necessitate innovative methodological approaches. Unlike consumer technology studies that can readily access thousands of participants through online panels, AT research involves highly specialized populations with significant constraints20,21. The limited population size means AT users represent a small percentage of the general population, with specific disability types further reducing available participants22. Access barriers are prevalent, as many potential participants face mobility, communication, or cognitive challenges that complicate traditional survey methods23. Additionally, guardians and family members are often overwhelmed with caregiving responsibilities, limiting research participation, while healthcare providers and special educators face heavy caseloads with minimal time for research participation24. These populations also experience survey fatigue, being frequently over-researched yet underserved, leading to participation reluctance.

Traditional power analyses indicate that testing TAM with multiple predictors requires minimum sample sizes of 100-150 participants25,26. However, recruitment efforts yielded 95 completed responses from 312 initial contacts (30% completion rate). While approaching the minimum threshold, this sample size motivated the synthetic data augmentation approach to achieve robust statistical power for detecting moderate effect sizes. Similar constraints have been reported across AT research, with de Witte et al.2 noting that limited data availability hampers evidence-based AT provision globally.

Two-Phase Solution

To overcome these endemic constraints while maintaining methodological rigor, an innovative two-phase approach was developed that maximizes the value of limited real-world data14,27.

Phase 1 consisted of an observational, cross-sectional pilot study with 95 real participants. Rather than attempting underpowered hypothesis testing, this phase was explicitly designed to extract empirical parameters including means, standard deviations, correlation structures, and qualitative themes that characterize distinct user personas in the AT context28. This approach acknowledges the reality that even small samples contain valuable information about population characteristics and relationships.

Phase 2 employed advanced computational methods to generate a synthetic dataset of 500 participants, providing the statistical power necessary for robust hypothesis testing. Critically, unlike simple random generation, a Gaussian copula method was used to preserve the correlation structure observed in the real data16,15. This methodological innovation ensures that relationships between variables were maintained in the synthetic dataset. For example, the strong correlation between Perceived Usefulness and Purchase Intention (r=0.839) observed in real data was preserved in synthetic generation.

This two-phase approach represents a pragmatic solution to a persistent challenge in disability and rehabilitation research: how to conduct statistically powered studies when the populations of interest face systematic barriers to large-scale participation29.

Participants

Phase 1: Real Sample

The initial real sample (n=95) was recruited through convenience sampling from AT-related professional networks and user communities in Seoul, South Korea. The sample composition reflected the accessibility of different stakeholder groups: professionals (n=65, 68%), primarily consisting of occupational therapists, special education teachers, and rehabilitation specialists; AT users (n=13, 14%), individuals with direct experience using assistive technologies for daily living; and guardians (n=17, 18%), family members responsible for AT decisions on behalf of users with disabilities. While this distribution was unbalanced, it provided sufficient variation to extract distinct persona parameters.

Phase 2: Synthetic Sample

The synthetic sample (n=500) was computationally generated to reflect a more representative distribution of the AT stakeholder ecosystem. Based on literature review and expert consultation about typical AT research populations, the synthetic sample was distributed as follows: AT users (n=210, 42.0%), guardians (n=161, 32.2%), and professionals (n=129, 25.8%). This adjusted distribution better represents the actual stakeholder proportions in AT adoption contexts, prioritizing the perspectives of end-users and their caregivers while maintaining representation of professional recommenders.

Demographic Information

Participants provided information about age, education level, monthly income, primary funding source for AT (government, insurance, self-funded, or other), and previous experience with assistive technologies.Core TAM Constructs.

TAM Constructs

All TAM constructs were measured using 5-point Likert scales (1=Strongly Disagree, 5=Strongly Agree). Core TAM constructs included Perceived Usefulness (3 items, α=0.85), measuring beliefs about functional improvement such as “This AT would improve my/the user’s daily functioning,” and Perceived Ease of Use (3 items, α=0.82), assessing learning and operational simplicity with items like “Learning to operate this AT would be easy for me/the user.”

Extended constructs included Technical Self-Efficacy (2 items), measuring confidence in ability to use new technologies; Social Influence (2 items), assessing the impact of important others’ opinions; Technology Anxiety (2 items), evaluating apprehension about using new technologies; Trust/Safety (2 items), examining confidence in AT reliability; Compatibility (2 items), assessing fit with lifestyle; Price Value (2 items), evaluating cost-benefit perceptions; and Support Conditions (2 items), measuring adequacy of training and support availability.

Qualitative Items

Two open-ended questions captured contextual factors: “What factors increase your confidence in adopting new AT?” and “What are the main reasons for AT abandonment in your experience?”

Data Collection and Synthetic Generation Procedures

Phase 1: Real Data Collection

Data collection occurred over a eight-week period (September-November 2024). The online survey was distributed through professional survey organization and AT user communities. Participants completed the survey anonymously taking approximately 15-20 minutes. Response rate was 30% (95 completed out of 312 accessed), reflecting the challenge of engaging these specialized populations30.

Phase 2: Correlation-Preserved Synthetic Data Generation

The synthetic data generation followed a rigorous methodology combining empirical parameters with literature-informed constraints31,32.

Step 1: Parameter and Correlation Extraction

The first step involved extracting comprehensive statistical information from the Phase 1 real data. For each persona group (Professional, AT User, Guardian), the mean and standard deviation for every TAM variable were calculated, creating distinct statistical profiles. Additionally, demographic distributions including age ranges, income levels, education categories, and funding sources were extracted. Most critically, the complete correlation matrix from the pooled real data was computed, capturing the relationships between all variables. Key correlations that needed preservation included Perceived Usefulness to Purchase Intention (r=0.839), Social Influence to Purchase Intention (r=0.634), Anxiety to Purchase Intention (r=0.595), and Trust/Safety to Purchase Intention (r=-0.609).

Step 2: Gaussian Copula Method for Correlation Preservation

To generate synthetic data that maintained these crucial relationships, the Gaussian copula method was employed15,33,34, a sophisticated statistical technique that preserves correlation structures while allowing flexibility in marginal distributions. The process began by generating multivariate normal random variables with the correlation structure observed in the real data. These correlated normal variables served as the foundation for maintaining relationships between constructs. Next, these normal variables were transformed to uniform distributions using the cumulative distribution function, creating what statisticians call a “copula,”a mathematical object that captures pure dependence structure independent of marginal distributions35.

Finally, inverse transformation was applied to map these uniform variables back to distributions matching target parameters for each persona. For Likert scale variables, this meant transforming to normal distributions with the observed means and standard deviations from Phase 1, then constraining values to the valid 1-5 range. This approach ensured that not only did individual variables match their empirical distributions, but the relationships between variables were preserved7.

Step 3: Literature-Informed Constraints

Beyond preserving statistical properties, theoretical constraints based on literature review were incorporated to ensure synthetic data reflected established AT adoption patterns. Following Cruz and Emmel11, it was ensured that higher income personas showed correspondingly higher purchase intentions to reflect the income-AT ownership relationship. Based on Li et al.5, social influence effects were varied according to persona visibility and cultural factors to account for stigma effects. Incorporating Dawe’s1 findings, complexity-related abandonment reasons were assigned more frequently to respondents with low technical competence to reflect real-world abandonment patterns.

Step 4: Validation Against Literature Expectations

The final step involved rigorous validation to ensure the synthetic data aligned with both empirical parameters from Phase 1 and theoretical predictions from literature. Verification confirmed that Perceived Usefulness emerged as the strongest predictor, consistent with TAM theory4; economic factors showed appropriate significance levels, as predicted by Cruz and Emmel11; support conditions demonstrated importance, aligning with de Witte et al.’s2 framework; and persona differences were maintained in patterns consistent with Lee and Templeton’s19 findings.

Statistical Analysis

Data analysis was conducted using Python 3.9 with pandas (v1.3.0), NumPy (v1.21.0), scipy (v1.7.0), and statsmodels (v0.12.2) libraries. The analytical approach differed between phases.

Phase 1 Analysis

Given the small sample size, Phase 1 analysis focused on descriptive statistics (means, standard deviations, ranges) by persona group, frequency distributions for categorical variables, and qualitative content analysis of open-ended responses using thematic coding36.

Phase 2 Analysis

The powered synthetic dataset enabled comprehensive statistical testing. Descriptive statistics were calculated for all variables, both overall and by persona. Pearson correlation matrices examined bivariate relationships between all TAM constructs. Multiple linear regression was employed for primary hypothesis testing using the model: 

Purchase Intention = β0 + β1(PU) + β2(PEOU) + β3(TSE) + β4(SI) + β5(ANX) + β6(TRUST) + β7(RA) + … + ε37

Path analysis tested the core TAM relationships (PEOU→PU, PU→PI, PEOU→PI). Variance Inflation Factor (VIF) analysis ensured predictor independence. To assess the stability of regression coefficients and model fit statistics, bootstrap validation was conducted with 1,000 iterations. For each iteration, a random sample of n=500 was drawn with replacement from the synthetic dataset, fitted the full regression model, and recorded all coefficients and R² values. I  calculated 95% confidence intervals using the percentile method (2.5th and 97.5th percentiles of the bootstrap distribution). This approach provides a robust assessment of parameter stability that does not rely on parametric assumptions and accounts for potential sampling variability in the synthetic data generation process.

Power Analysis

Post-hoc power analysis confirmed that n=500 provided power >0.99 to detect the observed effect size (R²=0.710) with α=0.05 for the 6-predictor regression model38,39,40.

Ethical Considerations

This research was conducted in accordance with ethical guidelines for survey research. Phase 1 received institutional approval with informed consent from all participants. The synthetic data generation process ensures complete anonymity as no individual-level real data is identifiable in the synthetic dataset. All data and analysis scripts will be made publicly available for reproducibility and transparency.

Results

Phase 1: Real Data Analysis

Sample Characteristics and TAM Profiles

Table 2 presents the demographic characteristics and TAM variable scores for the pilot sample (n=95).

Professional (n=15)AT User (n=3)Guardian (n=2)
Demographics
Age, M (SD)
Income (million KRW), M (SD)
Graduate Education (%)
Government Funding (%)

36.6 (4.5)
5.12 (0.61)
0.0%
52.3%

43.2 (14.1)
2.21 (0.55)
0.0%
76.9%

57.6 (5.2)
5.89 (0.61)
0.0%
0.0%
TAM Variables, M (SD)
Perceived Usefulness
Perceived Ease of Use
Technical Self-Efficacy
Anxiety
Social Influence
Trust/Safety
Purchase Intention

3.46 (0.59)
4.57 (0.50)
4.68 (0.47)
3.43 (0.61)
1.52 (0.53)
4.42 (0.50)
3.46 (0.59)

4.69 (0.48)
2.92 (1.26)
2.85 (0.80)
4.38 (0.51)
4.38 (0.77)
2.46 (0.52)
4.38 (0.51)

4.65 (0.49)
1.41 (0.51)
1.35 (0.49)
4.71 (0.47)
4.59 (0.51)
3.24 (0.75)
4.59 (0.51)
Table 2 | Phase 1 Sample Characteristics and TAM Variables by Persona
Note: All TAM variables measured on 5-point Likert scales

Correlation Structure in Real Data

Table 3 shows the correlation matrix from the pooled real data, revealing strong relationships that informed synthetic data generation.

Variablerp-value
Perceived Usefulness0.850<0.001
Social Influence0.800<0.001
Anxiety0.757<0.001
Trust/Safety-0.720<0.001
Perceived Ease of Use-0.670<0.001
Technical Self-Efficacy-0.660<0.001
Table 3 | Correlations with Purchase Intention (Phase 1, n=95)

These correlations contradicted initial assumptions about need-driven adoption and provided the foundation for correlation-preserved synthetic data generation.

Phase 2: Synthetic Data Analysis

Validation of Correlation Preservation

Table 4 demonstrates successful preservation of correlation structure in the synthetic data.

Variable RelationshipReal rSynthetic rDifference
PU → Purchase Intention0.8500.754-0.096
SI → Purchase Intention0.8000.783-0.017
Anxiety → Purchase Intention0.7570.724-0.034
Trust → Purchase Intention-0.720-0.644+0.076
PEOU → Purchase Intention-0.670-0.678-0.009
Table 4 | Correlation Preservation: Real vs. Synthetic Data (Mean Absolute Deviation: 0.043)

Descriptive Statistics

Table 5 presents descriptive statistics for the synetic sample consisted of 500 respondents

VariableMeanSDMinMaxSkewnessKurtosis
Core TAM
Perceived Usefulness
Perceived Ease of Use

4.35
2.88

0.73
1.44

2.00
1.00

5.00
5.00

-0.80
+0.14

-0.19
-1.38
Extended Constructs
Technical Self-Efficacy
Social Influence
Anxiety
Trust/Safety

2.84
4.25
3.71
3.22

1.35
0.75
1.39
1.02

1.00
2.00
1.00
1.00

5.00
5.00
5.00
5.00

+0.22
-0.67
-0.81
+0.17

-1.16
-0.21
-0.72
-0.84
Outcomes
Purchase Intention
4.170.732.005.00-0.39-0.64
Table 5 | Descriptive Statistics for Synthetic Sample (n=500)

Note: (a) VIF values exceed conventional thresholds due to inherent TAM construct intercorrelations; bootstrap validation confirmed coefficient stability (b) Variables include Latent Constructs (PU, PEOU, etc.) measured with scales, and Demographics (Persona, Experience, Age) used for grouping.

Multiple Regression Analysis 

The following findings are derived from the synthetic dataset (n=500) generated using correlation structures extracted from the Phase 1 real sample (n=95). These results should be interpreted as hypothesis-generating rather than confirmatory, pending replication with independent real samples.

Based on TAM theory and AT-specific considerations, I tested the following hypotheses:
H1: Perceived Usefulness positively predicts Purchase Intention (Core TAM)
H2: Perceived Ease of Use positively predicts Purchase Intention (Core TAM);
H3: Perceived Ease of Use positively predicts Perceived Usefulness (TAM mediating path);
H4: Social Influence positively predicts Purchase Intention (Extended TAM/UTAUT);
H5: Trust and Safety positively predicts Purchase Intention (AT-specific);
H6: Technical Self-Efficacy moderates the PEOU-PI relationship (Individual differences);
H7: Technology Anxiety negatively predicts Purchase Intention (Barrier factor).
The hierarchical regression approach allowed systematic testing of these hypotheses by entering variable blocks sequentially.

Table 6 presents the full regression results predicting purchase intention. The model explained 71.0% of variance (R²=0.710, F(6,493)=200.97, p<0.001). Variance Inflation Factors exceeded conventional thresholds (max VIF=90.6). Bootstrap validation with 1,000 iterations confirmed the stability of my model. The R² confidence interval was narrow (95% CI [0.669, 0.756]), and the two strongest predictors showed stable effects that did not cross zero: Social Influence (β = 0.408, 95% CI [0.319, 0.498]) and Perceived Usefulness (β = 0.325, 95% CI [0.227, 0.415]). Trust/Safety also showed a stable negative effect (β = -0.105, 95% CI [-0.189, -0.019]). Non-significant predictors (PEOU, Technical Self-Efficacy, Anxiety) had confidence intervals crossing zero, as expected. These results demonstrate that key findings are robust and not artifacts of model specification.

To verify model generalizability, I conducted split-sample validation by randomly partitioning the synthetic dataset into training (70%, n=350) and testing (30%, n=150) subsets. The model trained on the training subset yielded R²=0.706, while application to the held-out test subset produced R²=0.710, demonstrating excellent cross-validation consistency (CV ratio=1.006). This confirms that findings are not artifacts of overfitting to the synthetic data structure.

PredictorBSEβtp-valueVIF
(Intercept)1.6830.3804.43<0.001***
Perceived Usefulness0.3230.0420.3257.71<0.001***90.6
Perceived Ease of Use-0.0410.033-0.081-1.230.21935.8
Social Influence0.3970.0400.40810.00<0.001***73.8
Trust/Safety-0.0750.032-0.105-2.340.019*13.6
Technical Self-Efficacy-0.0490.042-0.090-1.150.24945.8
Anxiety-0.0280.036-0.054-0.780.43844.0
Table 6 | Multiple Regression Results for Purchase Intention (n=500).

Note: *p < 0.05, **p < 0.01, ***p < 0.001. VIF values exceed conventional thresholds due to inherent TAM construct intercorrelations. Bootstrap validation (1,000 iterations) confirmed coefficient stability for significant predictors: R² 95% CI [0.669, 0.756].

Model Summary:

  • R² = 0.710* (71.0% of variance explained)
  • Adjusted R² = 0.706
  • F(6,493) = 200.97, p < 0.001*

Core TAM Path Analysis

Table 7 presents separate regression analyses testing fundamental TAM relationships.

PathβSEtp-value
H1: PU → PI-0.644***0.017-18.79<0.0010.415***
H2: PEOU → PI0.754***0.02925.60<0.0010.568***
H3: PEOU → PU-0.678***0.017-20.61<0.0010.460***
Table 7 | Core TAM Path Analysis Results

Note: (a) *p < 0.05, **p < 0.01, ***p < 0.001(b) PU = Perceived Usefulness, PEOU = Perceived Ease of Use, PI = Purchase Intention
The negative relationships involving PEOU represent a key finding distinguishing AT from consumer technology contexts.

Persona Comparisons

Table 8 compares key variables across personas, revealing significant differences in antecedents but not outcomes.

VariableAT User (n=210)Guardian (n=161)Professional (n=129)Fp-valueη²
Perceived Ease of Use2.92 (1.20)1.50 (0.54)4.53 (0.53)428.63***<0.0010.63
Technical Self-Efficacy2.82 (0.79)1.45 (0.51)4.60 (0.51)867.10***<0.0010.78
Anxiety4.38 (0.69)4.52 (0.54)1.60 (0.59)1011.37***<0.0010.80
Purchase Intention4.33 (0.57)4.53 (0.55)3.45 (0.66)134.95***<0.0010.35
Table 8 | ANOVA Results: Persona Differences in Synthetic Data

Despite dramatic differences in perceived ease of use (η² = 0.63) and technical competence (η² = 0.78), purchase intention shows significant variation across personas (η² = 0.35), with Guardians showing highest intentions.

Discussion

This two-phase investigation of assistive technology acceptance yielded unexpected but theoretically important findings. When correlation structures from real data were preserved in synthetic generation, the Technology Acceptance Model explained substantial variance in AT adoption intentions (R² = 0.710, p < 0.001). However, the nature of these relationships differed markedly from consumer technology contexts.

Most notably, social influence emerged as the strongest predictor (β = 0.408, p < 0.001), followed by perceived usefulness (β = 0.325, p < 0.001). Perceived ease of use showed a negative relationship (β = -0.081), though this effect was not statistically significant (p = 0.219). Trust/Safety showed a small negative effect (β = -0.105, p = 0.019), while Technical Self-Efficacy and Anxiety added no significant explanatory power beyond the core TAM constructs.

The significant model performance (R² = 71.0%) demonstrates that TAM constructs do predict AT adoption, contrary to initial assumptions about purely need-driven adoption41. However, the directionality and relative importance of relationships reveal fundamental differences in how AT is evaluated compared to consumer technologies.

The dominance of social influence (β = 0.408) as the strongest predictor challenges the traditional TAM emphasis on perceived usefulness as the primary adoption driver. This finding aligns with Li et al.’s5 emphasis on social factors in AT adoption, but extends it by demonstrating that social influence outweighs even functional utility perceptions. In the AT context, recommendations from healthcare providers, family members, and peers appear to carry more weight than individual assessments of device utility.

However, my Korean sample showed positive social influence effects, contrasting with Li et al.’s findings of stigma-driven avoidance in China. This suggests important cultural variations in how social factors influence AT adoption, warranting cross-cultural investigation. The collectivist orientation of Korean culture may amplify the positive effects of social recommendations while reducing stigma concerns.

The negative PEOU effect (β = -0.081), though not statistically significant, suggests an intriguing pattern that warrants further investigation: devices perceived as “too easy” may be seen as less sophisticated or less capable of addressing complex needs. This aligns with Spinelli et al.’s6 findings about identity and stigma. Users may associate simpler devices with greater disability visibility.

Qualitative observations from pilot interviews support this interpretation. Thematic analysis of open-ended responses revealed three recurring patterns: (1) concerns that simple devices signal limited capability and may not address complex needs, (2) desire to demonstrate technical competence through device sophistication, and (3) stigma associations with overly simplified designs that might indicate greater disability severity. One occupational therapist noted: ‘When a device looks too simple, families worry it won’t handle their child’s complex needs.’ An AT user expressed: ‘I want technology that shows I’m capable of learning something sophisticated, not something that makes me look helpless.’ A guardian commented: ‘The simpler devices felt like they were for much more severe cases than my son.’ These perspectives suggest that perceived simplicity may carry stigmatizing connotations in the AT context. 

Technical Self-Efficacy and Anxiety added no significant explanatory power when Social Influence and Perceived Usefulness were already in the model (ΔR² = 0.001, p = 0.502). This finding suggests that need-driven effects may override traditional psychological barriers: those with higher technology anxiety or lower self-efficacy may have more urgent needs for assistive support, rendering their apprehensions secondary to functional necessity.

The ANOVA results confirmed distinct persona profiles while revealing a crucial paradox: despite dramatic differences in technical confidence (η² = 0.78 for self-efficacy, with Professionals M=4.60 vs. Guardians M=1.45), all groups showed high purchase intentions. This pattern supports the theoretical proposition that different stakeholders reach similar adoption decisions through different psychological mechanisms: professionals through evidence evaluation, users through functional need, and guardians through caregiving imperatives.

A critical methodological insight emerged from comparing approaches to synthetic data generation. Initial attempts using independent variable generation (ignoring correlation structures) yielded poor results (R² = 0.024), while correlation-preserved generation using Gaussian copula methods produced highly significant findings (R² = 0.710). This 30-fold increase in explained variance demonstrates that preserving empirical correlation structures is essential for valid synthetic data generation in behavioral research.

To validate this methodological choice, I benchmarked the Gaussian copula approach against parametric bootstrap, which resamples from fitted normal distributions but ignores correlation structure. The parametric approach yielded R²=0.010 (99% lower than copula), confirming that correlation preservation is not merely beneficial but essential. Simple parametric matching of marginal distributions proves dramatically insufficient for behavioral data where construct relationships carry the critical information.

Quantitative comparison of coefficients between correlation-preserved and independent-generation methods reveals substantial differences: Perceived Usefulness β changed by +1,254% (0.024 to 0.325), Social Influence by +920% (0.040 to 0.408), and the model R² increased by +2,858% (0.024 to 0.710). These dramatic percentage changes underscore the critical importance of preserving empirical correlation structures in synthetic data generation.

This finding has implications beyond the current study. Researchers increasingly use synthetic data to address small sample limitations, but my results show that simple parametric generation (matching means and standard deviations) is insufficient. The correlation structure carries crucial information about construct relationships that must be preserved for valid inference.

The two-phase approach demonstrates how empirical data collection and literature-grounded synthetic expansion can be effectively combined. By using Phase 1 data to extract not just parameters but correlation structures, and constraining synthetic generation with literature-based expectations (Table 1), I created a dataset that is both empirically grounded and theoretically informed. This hybrid approach offers a template for research in specialized populations where large-scale data collection is infeasible.

The dominance of social influence (β = 0.408) over perceived usefulness (β = 0.325) challenges conventional design wisdom that prioritizes functional optimization. While functional capability remains important, the findings suggest that AT adoption is fundamentally a socially-mediated decision. This has profound implications for design strategy.

The negative (though non-significant) relationship between perceived ease of use and adoption warrants careful interpretation. Rather than simplifying interfaces at all costs, AT designers should consider that users may interpret complexity as capability. This doesn’t advocate for unnecessarily complicated designs, but suggests that visible sophistication may actually increase adoption by signaling that the device can address complex needs. Design should balance genuine usability with visible capability, interfaces that are intuitive to use but sophisticated in appearance.

These findings carry concrete implications for AT policy and funding. First, rehabilitation agencies should prioritize social influence interventions, training healthcare providers and peer mentors to actively recommend AT, as results show social influence (β = 0.408) substantially outweighs perceived ease of use in adoption decisions. Second, AT funding bodies should consider subsidizing ‘gateway’ technologies that allow users to demonstrate technical competence, addressing the stigma-complexity dynamic revealed in qualitative data. Third, evidence-based AT provision guidelines could incorporate synthetic data methodologies to generate preliminary adoption estimates for novel technologies before costly large-scale trials, potentially accelerating the evidence-to-practice pipeline by 2-3 years based on typical AT study timelines.

The strong effects of social influence (β = 0.408) and perceived usefulness (β = 0.325) suggest clear intervention priorities. Implementation should focus heavily on two areas:

First, building robust social recommendation networks. Given that social influence is the strongest predictor, implementation strategies should prioritize training healthcare providers, peer mentors, and family members as AT advocates. Peer support programs and user communities may be more effective than traditional training approaches focused on individual skill-building.

Second, demonstrating concrete improvements in daily functioning with persona-specific evidence. Professionals need clinical efficacy data, users need peer testimonials showing real-world impact, and guardians need safety records and burden-reduction evidence. Marketing and education materials should be tailored to these distinct evidence needs rather than providing generic information.

The negligible effects of Technical Self-Efficacy and Anxiety (ΔR² = 0.001, p = 0.502) suggest these traditional barriers are less critical than social and functional factors. Implementation should provide robust support systems but need not delay deployment to build confidence or reduce anxiety, need urgency appears to override these psychological barriers.

The literature reports 30-50% AT abandonment rates1, yet my synthetic data showed high continuance intentions across all personas. This gap suggests that intention-behavior relationships may be weaker in AT than consumer technology contexts. Post-adoption factors not captured in acceptance models such as device reliability, evolving needs, inadequate support, environmental changes, which may drive abandonment more than initial acceptance barriers.

This finding suggests that resources currently invested in pre-adoption confidence-building might be better allocated to post-adoption support: periodic needs reassessment, adaptation assistance, technical support, and ongoing training. Preventing abandonment may require sustained engagement rather than optimizing initial acceptance.

Several limitations warrant consideration. First, while 95 real participants (65 professionals, 13 AT users, 17 guardians) represent a significant improvement over typical AT studies, the sample remains modest for 13-variable correlation estimation, and parameter estimates should be interpreted with appropriate caution. The strong correlations observed might be inflated by sample size42, though bootstrap validation showed stable confidence intervals for the strongest effects.

Second, while correlation preservation improved model performance dramatically (30-fold increase in R²), synthetic data cannot capture unmeasured variables or non-linear relationships not observed in the pilot sample43. The Gaussian copula preserves linear correlations (verified through Pearson-Spearman comparison showing differences <0.025), but may not fully capture non-linear relationships or higher-order interactions present in real data.

Third, data collection in Seoul, South Korea may limit generalizability. The positive social influence effects may reflect Korean collective culture rather than universal AT adoption patterns. Cross-cultural validation is needed to determine whether these findings represent fundamental AT adoption dynamics or culturally-specific patterns.

Fourth, intentions were measured rather than actual adoption behavior. The intention-behavior gap may be particularly large in AT contexts where practical barriers (cost, availability, insurance coverage) or post-adoption factors (reliability issues, changing needs) override initial intentions. Longitudinal studies tracking actual adoption and continued use are needed.

The priority for future studies is replication with larger real samples (n ≥ 150) to validate the unexpected relationships with adequate statistical power37. Particular attention should be paid to the negative PEOU relationship, which showed consistent direction across analyses but failed to reach significance. With a larger sample, the true nature and significance of this relationship can be definitively established.

Comparing Korean results with Western samples would clarify whether negative PEOU effects and the dominance of social influence are culturally specific or universal to AT adoption. Given the collectivist-individualist cultural dimension, Western samples might show weaker social influence effects and different PEOU dynamics.

Longitudinal behavioral studies tracking actual adoption and continued use would validate whether high intentions translate to behavior and identify critical abandonment points44. Understanding the intention-behavior gap is particularly important in AT contexts where financial, practical, and ongoing support factors may intervene between intention and sustained use.

Testing alternative theoretical models may provide better frameworks. UTAUT2, which includes hedonic motivation and habit, or AT-specific models incorporating disability severity, urgency of need, stigma concerns, and caregiver dynamics may capture additional variance45. The current TAM framework explains 71% of variance, understanding the remaining 29% could reveal additional intervention opportunities.

Finally, experimental validation through randomized trials would test causal mechanisms. Does complexity signaling actually increase adoption compared to simplicity-focused designs? Do peer support programs leveraging social influence improve outcomes compared to traditional training? Such pragmatic trials would provide actionable evidence for AT implementation46.

This two-phase study provides novel evidence about assistive technology acceptance. By preserving correlation structures from empirical data in synthetic generation, I demonstrated that TAM does predict AT adoption (R² = 71.0%), but through unexpected pathways. Social influence emerged as the dominant predictor (β = 0.408), surpassing even perceived usefulness (β = 0.325), while perceived ease of use showed a counterintuitive negative trend. These patterns suggest AT adoption follows a fundamentally different logic than consumer technology: one driven by social recommendations and capability signaling rather than by ease-of-use optimization.

The methodological contribution, demonstrating that correlation preservation is essential for valid synthetic data generation, has implications beyond AT research. As researchers increasingly use synthetic data to address small sample limitations47, these findings emphasize that preserving empirical relationship structures is as important as matching distributional parameters. Simple parametric generation proves dramatically insufficient (99% reduction in R²), underscoring the necessity of correlation-preserved approaches.

These findings challenge both theoretical assumptions and practical conventions in AT design and implementation. Rather than simplifying interfaces and building technical confidence, successful AT adoption may require demonstrating sophisticated capability while leveraging social recommendation networks. Different stakeholder groups reach similar high adoption intentions through distinct psychological pathways, suggesting that persona-specific implementation strategies may be more effective than universal approaches.

Future research should validate these unexpected relationships with larger real samples and explore whether they represent universal AT adoption patterns or cultural specificities. Only through such continued investigation can evidence-based strategies be developed to improve AT adoption and ultimately enhance quality of life for individuals with disabilities.

References

  1. Dawe, M. (2006). Desperately seeking simplicity: How young adults with cognitive disabilities and their families adopt assistive technologies. In R. Grinter et al. (Eds.), Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1143-1152). ACM. https://doi.org/10.1145/1124772.1124943 [] [] [] [] []
  2. de Witte, L., Steel, E., Gupta, S., Delgado Ramos, V., & Roentgen, U. (2018). Assistive technology provision: Towards an international framework for assuring availability and accessibility of affordable high-quality assistive technology. Disability and Rehabilitation: Assistive Technology, 13, 467-472. https://doi.org/10.1080/17483107.2018.1470264 [] [] [] []
  3. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13, 319-340. https://doi.org/10.2307/249008 [] []
  4. King, W. R., & He, J. (2006). A meta-analysis of the technology acceptance model. Information & Management, 43, 740-755. [] [] []
  5. Li, F. M., Chen, D. L., Fan, M., & Truong, K. N. (2021). “I choose assistive devices that save my face”: A study on perceptions of accessibility and assistive technology use conducted in China. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM. https://doi.org/10.1145/3411764.3445321 [] [] [] [] []
  6. Spinelli, G., Micocci, M., Martin, W., & Wang, Y. (2019). From medical devices to everyday products: Exploring cross-cultural perceptions of assistive technology. Design for Health, 3, 324-340. https://doi.org/10.1080/24735132.2019.1680065 [] [] [] [] []
  7. Sella, E., Guinot, L., Lagrange, E., et al. (2025). MIIC-SDG: Multivariate information-based synthetic data generation for small samples. npj Digital Medicine. [] [] []
  8. Iancu, I., & Iancu, B. (2017). Elderly in the digital era: Theoretical perspectives on assistive technologies. Technologies, 5, 60. https://doi.org/10.3390/technologies5030060 [] [] [] []
  9. Goodarzi, F., Barati, M., Bashirian, S., Ayubi, E., Rahbar, S., & Cheraghi, P. (2024). The experiences of the elderly regarding the use of rehabilitation assistive technologies: A directed qualitative content analysis. Disability and Rehabilitation: Assistive Technology, 19, 2857-2868. https://doi.org/10.1080/17483107.2024.2313081 [] []
  10. van der Heijden, H., Verhagen, T., & Creemers, M. (2003). Understanding online purchase intentions: Contributions from technology and trust perspectives. European Journal of Information Systems, 12, 41-48. https://doi.org/10.1057/palgrave.ejis.3000445 [] [] [] []
  11. Cruz, D. M. C., & Emmel, M. L. G. (2013). Associations among occupational roles, independence, assistive technology, and purchasing power of individuals with physical disabilities. Revista Latino-Americana de Enfermagem, 21, 484-491. https://doi.org/10.4090/S0104-11692013000200003 [] [] [] [] []
  12. Summers, M. P., & Verikios, G. (2018). Assistive technology pricing in Australia: is it efficient and equitable? Australian Health Review, 42(1), 100-110. https://doi.org/10.1071/AH16042 [] []
  13. Chang, T., & Huang, S. (2022). Factors influencing the reputation of Assistive Technology Resources Center: An example from Yunlin County, Taiwan. Healthcare, 10, 243. https://doi.org/10.3390/healthcare10020243 [] []
  14. Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). SAGE Publications. [] [] []
  15. Nelsen, R. B. (2006). An introduction to copulas. Springer Series in Statistics. [] [] [] []
  16. Restrepo, B. J., Rivera, J. C., Laniado, H., Osorio, R., & Becerra, O. (2023). Nonparametric generation of synthetic data using copulas. Electronics, 12, 1601. [] [] []
  17. Huang, W., Kao, C., & Su, Y. (2017). An empirical study on the purchasing behavior for telehealthcare systems in Taiwan. Journal of Business Strategy, 9, 67-88. https://doi.org/10.3966/207321472017060902001 [] [] [] [] [] [] []
  18. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27, 425-478. [] []
  19. Lee, H., & Templeton, R. (2008). Ensuring equal access to technology: Providing assistive technology for students with disabilities. Theory Into Practice, 47, 212-219. https://doi.org/10.1080/00405840802153874 [] [] [] []
  20. Fok, D., Henry, D., & Allen, J. (2015). Research designs for intervention research with small samples II: Stepped wedge and interrupted time-series designs. Prevention Science, 16, 967-977. [] []
  21. Graham, J. E., Karmarkar, A. M., & Ottenbacher, K. J. (2012). Small sample research designs for evidence-based rehabilitation: Issues and methods. Archives of Physical Medicine and Rehabilitation, 93, 8. [] []
  22. Loeb, M. E., & Eide, A. H. (2012). International comparability of disability prevalence estimates: Impact of measurement approaches. Disability and Rehabilitation. []
  23. Shariq, S., Cardoso Pinto, A. M., Budhathoki, M., et al. (2023). Barriers and facilitators to recruiting disabled people to clinical trials: A scoping review. Trials. []
  24. Goedhart, N. S., Pittens, C. A., et al. (2021). Engaging hard-to-reach populations in research: A scoping review. Research Involvement and Engagement. []
  25. Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2018). Multivariate data analysis (8th ed.). Cengage Learning. []
  26. Memon, M. A., Ting, H., Cheah, J. H., Ramayah, T., Chuah, F., & Cham, T. H. (2020). Sample size for survey research: Review and recommendations. Journal of Applied Structural Equation Modeling, 4, 1-20. []
  27. Ivankova, N. V., Creswell, J. W., & Stick, S. L. (2006). Using mixed-methods sequential explanatory design: From theory to practice. Field Methods, 18, 3-20. []
  28. Malmqvist, J., et al. (2019). Conducting pilot studies: A methodological primer. International Journal of Qualitative Methods. []
  29. Fetters, M. D., Curry, L. A., & Creswell, J. W. (2013). Achieving integration in mixed methods designs – principles and practices. Health Services Research, 48, 2134-2156. []
  30. Bould, K., Tate, R., Simpson, G., et al. (2023). Recruitment protocol challenges in assistive technology studies. JMIR Research Protocols. []
  31. Dankar, F. K., Ibrahim, M. K., & Ismail, L. (2022). A multi-dimensional evaluation framework for synthetic data generation. IEEE Access. []
  32. Hernandez, M., Epelde, G., Alberdi, A., Cilla, R., & Rankin, D. (2023). Validation methods for synthetic healthcare data. Methods of Information in Medicine. []
  33. Sklar, A. (1959). Fonctions de repartition a n dimensions et leurs marges. Publications de l’Institut de Statistique de l’Universite de Paris, 8, 229-231. []
  34. Haugh, M. (2016). An introduction to copulas. Columbia University Technical Report. []
  35. Li, Z., Zhao, X., & Fu, Y. (2020). SynC: A unified framework for generating synthetic populations with Gaussian copula. IEEE International Conference on Data Mining Workshops. []
  36. Arain, M., Campbell, M. J., Cooper, C. L., & Lancaster, G. A. (2010). What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Medical Research Methodology, 10, 67. []
  37. Kline, R. B. (2015). Principles and practice of structural equation modeling (4th ed.). Guilford Press. [] []
  38. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum Associates. []
  39. Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175-191. []
  40. Faul, F., Erdfelder, E., Buchner, A., & Lang, A. G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41, 1149-1160. []
  41. Khechine, H., Lakhal, S., & Ndjambou, P. (2016). A meta-analysis of the UTAUT model: Eleven years later. Canadian Journal of Administrative Sciences, 33, 138-152. []
  42. Westland, J. C. (2010). Lower bounds on sample size in structural equation modeling. Electronic Commerce Research and Applications, 9, 476-487. []
  43. Fujimoto, Y., Ishikawa, A., & Mizuno, T. (2022). Copula-based synthetic data generation for economic analysis. Review of Socionetwork Strategies. []
  44. Fleming, R., & Sum, S. (2014). Empirical studies on the effectiveness of assistive technology in the care of people with dementia: A systematic review. Journal of Assistive Technologies. []
  45. Shin, D. H., Um, J. Y., Yoon, J. H., Choi, J. H., Shin, S. Y., Lee, Y. S., & Kim, J. H. (2023). Development and validation of a Senior Technology Acceptance Model. Journal of Medical Internet Research. []
  46. Wiese, L. K., et al. (2024). Understanding adoption of mobility assistive products: A systematic review. Disability and Rehabilitation: Assistive Technology. []
  47. Khosravi, P., et al. (2024). Binary Gaussian copula synthesis improves dialysis prediction in chronic kidney disease. Journal of Medical Internet Research. []

LEAVE A REPLY

Please enter your comment!
Please enter your name here