run-acceptance-tests
by hashicorp

Guide for the run-acceptance-tests skill for Terraform provider acceptance testing. Use it to run focused `TestAcc` tests, handle required environment variables securely, and debug failures with a clear step-by-step workflow.
This skill scores 78/100, which means it is a solid but focused listing for directory users: it gives enough explicit workflow guidance to trigger acceptance-test runs correctly and reduce guesswork, though it is not a broad or highly polished operational guide.
- Explicit triggerability for Terraform acceptance tests, including the `TestAcc` prefix and `TF_ACC=1` requirement.
- Concrete run-and-diagnose workflow: retry with `-count=1`, then `-v`, then `TF_LOG=debug`, then optional workspace persistence.
- Useful remediation advice when provider-specific environment variables are missing, which helps agents recover from common execution failures.
- Single-purpose and somewhat narrow: it is aimed at running Terraform provider acceptance tests, not general test automation.
- No supporting scripts, references, or examples in the repository tree, so agents must rely on the prose instructions alone.
Overview of run-acceptance-tests skill
What this skill does
The run-acceptance-tests skill helps you run Terraform provider acceptance tests correctly, especially tests named with the TestAcc prefix. It is built for the acceptance-testing workflow, where the main job is not just to run a test, but to run it with the right environment, interpret the failure, and know when extra provider-specific setup is required.
Who should use it
Use the run-acceptance-tests skill if you are working on a Terraform provider and need a practical run-acceptance-tests guide for local validation, CI troubleshooting, or reproducing a flaky result. It is most useful when you already have a specific acceptance test in mind and need a reliable way to execute it without guessing at flags or environment variables.
What makes it different
This skill is opinionated about the sequence that matters: start with a focused `go test -run=...` invocation, then add `-count=1`, `-v`, `TF_LOG=debug`, and workspace persistence only when the first run is not enough. That makes the run-acceptance-tests skill better than a generic prompt because it encodes a diagnostic ladder instead of asking you to improvise.
How to Use run-acceptance-tests skill
Install the skill
Install the run-acceptance-tests skill with:
npx skills add hashicorp/agent-skills --skill run-acceptance-tests
Before adopting run-acceptance-tests for a Terraform provider workflow, confirm that your environment can run Go tests and that you can safely set provider credentials when needed. The skill assumes an acceptance-testing context, not a standalone demo project.
Give the skill a precise test target
The best input is a concrete TestAcc name, not a vague request like “check the provider tests.” For example, ask for something like: “Run TestAccFeatureHappyPath and diagnose any missing env vars.” The skill works best when the target test name, provider, and expected behavior are explicit.
Start with the right files and signals
Begin with SKILL.md, then inspect the repository’s README.md, AGENTS.md, metadata.json, and any supporting rules/, resources/, references/, or scripts/ folders if they exist. In this repository, the main guidance is concentrated in SKILL.md, so file-tree inspection matters less than in larger skills, but it is still useful to confirm there are no hidden helper files.
Follow the execution and debug ladder
For a normal run, use `TF_ACC=1 go test -run=TestAccFeatureHappyPath` and keep the output non-verbose at first. If the test fails, retry with `-count=1` to avoid cached results, then add `-v`, then `TF_LOG=debug`, and only then consider `TF_ACC_WORKING_DIR_PERSIST=1` to inspect Terraform state between steps. This staged workflow is the core of the run-acceptance-tests usage pattern.
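The staged workflow can be sketched as the sequence of commands below. The test name and package path are hypothetical placeholders; the `run` wrapper only prints each rung so the escalation order is visible — in a real provider checkout you would drop the wrapper and execute the command matching your current step.

```shell
# Hypothetical placeholders: substitute your real TestAcc name and package path.
TEST=TestAccFeatureHappyPath
PKG=./internal/provider/

# `run` just prints each rung; execute the commands directly in a real checkout.
run() { echo "+ $*"; }

run TF_ACC=1 go test "$PKG" -run="$TEST"                          # 1. focused, non-verbose run
run TF_ACC=1 go test "$PKG" -run="$TEST" -count=1                 # 2. bypass the Go test cache
run TF_ACC=1 go test "$PKG" -run="$TEST" -count=1 -v              # 3. add verbose test output
run TF_ACC=1 TF_LOG=debug go test "$PKG" -run="$TEST" -count=1 -v # 4. add Terraform debug logs
run TF_ACC=1 TF_LOG=debug TF_ACC_WORKING_DIR_PERSIST=1 \
    go test "$PKG" -run="$TEST" -count=1 -v                       # 5. persist the working dir
```

Each rung keeps the flags from the previous one, so by step 5 you are running a verbose, cache-bypassing test with debug logs and a persisted working directory.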
run-acceptance-tests skill FAQ
Is this only for Terraform provider acceptance tests?
Yes. The run-acceptance-tests skill is scoped to Terraform provider acceptance testing, especially Go tests that use the TestAcc naming convention. It is not meant for unit tests, generic Go test suites, or unrelated infrastructure checks.
What if the test needs extra environment variables?
That is expected. The skill explicitly assumes some providers require additional environment variables and tells you to surface missing variables from test output and set them up securely. If credentials or endpoints are missing, treat that as part of the run-acceptance-tests guide, not as an error in the skill itself.
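As a sketch of that recovery step, a small pre-flight check can surface missing variables before a long test run. The variable names below are made up for illustration; the real ones are provider-specific and come from the provider's docs or the test failure output.

```shell
# Hypothetical variable names; substitute the ones your provider's
# acceptance tests actually require.
required_vars="EXAMPLE_API_KEY EXAMPLE_API_ENDPOINT"

missing=""
for v in $required_vars; do
  # Indirect lookup via eval keeps this POSIX-sh compatible.
  eval "val=\${$v:-}"
  [ -n "$val" ] || missing="$missing $v"
done

if [ -n "$missing" ]; then
  echo "missing required env vars:$missing"
else
  echo "env vars present; safe to run TF_ACC=1 go test"
fi
```

Running the check before `TF_ACC=1 go test` turns a slow mid-test failure into an immediate, actionable message.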
Do I need this instead of a normal prompt?
Use the skill when you want a repeatable procedure, not just a one-off answer. A normal prompt may tell you to run a test; the run-acceptance-tests skill tells you which flags, environment variables, and escalation steps to use when the first attempt fails or when a passing test still needs verification.
Is it beginner-friendly?
Yes, if you can already run Go commands and understand basic environment variables. The skill reduces guesswork for newcomers by starting with a focused command and clear debug escalation, but it still expects you to recognize provider-specific credentials, Terraform behavior, and test naming conventions.
How to Improve run-acceptance-tests skill
Provide stronger test context
The most useful inputs are the exact test name, the provider package, and the symptom you are trying to reproduce. “Run acceptance tests” is too broad; “Run TestAccResourceBasic in the internal/provider package and investigate a clue that only shows up under TF_LOG” gives the skill enough context to choose the right path quickly.

Share failure details, not just failure status
If the first run fails, include the full test output, missing-variable messages, and whether the result changed after -count=1 or -v. The run-acceptance-tests skill improves when you feed it the actual failure shape, because provider acceptance tests often fail for different reasons: auth, remote API readiness, state drift, or test flakiness.
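One way to hand over the failure shape rather than just the status is to save the full log (for example via `go test ... -v 2>&1 | tee acc-run.log`) and do a rough first-pass triage on it. The sample log below is invented for illustration, and the grep patterns are only a sketch of the failure classes named above, not an exhaustive classifier.

```shell
# Invented sample log standing in for a real captured `go test` run.
cat > /tmp/acc-run.log <<'EOF'
=== RUN   TestAccResourceBasic
    provider_test.go:21: EXAMPLE_API_KEY must be set for acceptance tests
--- FAIL: TestAccResourceBasic (0.00s)
EOF

# Rough first-pass triage over the common failure classes.
triage() {
  if grep -qiE 'must be set|missing .*(env|variable)' "$1"; then
    echo "likely cause: missing environment variable"
  elif grep -qiE 'unauthorized|forbidden|401|403' "$1"; then
    echo "likely cause: authentication"
  elif grep -qiE 'timeout|connection refused|not ready' "$1"; then
    echo "likely cause: remote API readiness"
  else
    echo "unclassified: share the full log"
  fi
}

triage /tmp/acc-run.log
```

Even a coarse classification like this tells you whether to reach for credentials, retry logic, or the deeper `TF_LOG=debug` rung before rerunning.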
Use the debug options in the right order
Ask for the minimum necessary escalation first, then expand only if the evidence demands it. For run-acceptance-tests usage, that usually means starting with a single focused TestAcc name, then adding verbose output, debug logs, or workspace persistence only after you know what you need to inspect.
Iterate on the test, not just the command
If you need to confirm a passing test is meaningful, change one check or one step and rerun it rather than only repeating the same command. That makes the run-acceptance-tests skill more valuable for Acceptance Testing because it helps you distinguish a real pass from a false negative and tighten the test’s signal over time.
