T0/T1/T2 Tests

Overview

We have many tests to verify platforms, and it can be overwhelming to know which ones to prioritize. This page outlines the different tests, categorized by type and priority.

The different priorities are as follows:

  • T0 - Test cases that check simple yet critical functionality and are very important to enable other tests to pass.
  • T1 - Test cases that are a little more complicated and will help verify overall functionality.
  • T2 - Test cases that are either for complicated features or related to performance tuning. Normally, they will not block FBOSS bring up on new platforms at early stages, but they will block platform qualification before deploying in production networks.

T0/T1/T2 tests are only a subset of all tests; working through them tier by tier will help you debug issues and eventually achieve a 100% pass rate. To keep the process organized, we recommend passing all T0 test cases first (especially for New Platform Onboarding EVT exit), then T1, then T2. After that, run all tests and work toward a 100% pass rate: do not specify T0/T1/T2, a filter file, or a gtest filter, so that the binary runs every test.

Even though we recommend following this process to make testing more organized, feel free to run all tests and work through them however you please.

Test Configuration Variables

When running tests with run_test.py, you may need to specify the following variables:

  • $CONFIG: Path to the hardware test configuration file for your platform
  • $QSFP_CONFIG: Path to the QSFP test configuration file for your platform
  • $ASIC: ASIC identifier for production features filtering. Available ASICs can be found in ./share/production_features/asic_production_features.materialized_JSON under the asicToFeatureNames key.
  • $KEY: Test configuration key for skipping known bad or unsupported tests, used with --skip-known-bad-tests. The key format is: vendor/coldboot-sai/warmboot-sai/asic (e.g., brcm/8.2.0.0_odp/8.2.0.0_odp/tomahawk). Each test runner uses default known-bad and unsupported test files for the lookup, but these can be overridden with --known-bad-tests-file and --unsupported-tests-file if needed. Available keys can be found in:
    • For SAI Agent tests: ./share/sai_hw_unsupported_tests/sai_hw_unsupported_tests.materialized_JSON
    • For SAI tests: ./share/sai_hw_unsupported_tests/sai_hw_unsupported_tests.materialized_JSON
    • For QSFP tests: ./share/qsfp_unsupported_tests/fboss_qsfp_unsupported_tests.materialized_JSON
    • For Link tests: ./share/link_known_bad_tests/agent_ensemble_link_known_bad_tests.materialized_JSON
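For illustration, the variables above might be set as follows before invoking run_test.py. All values below are hypothetical placeholders; substitute your platform's actual config files and key components:

```shell
# Hypothetical placeholder values; substitute the ones for your platform.
CONFIG="my_platform.agent.materialized_JSON"      # hypothetical file name
QSFP_CONFIG="my_platform.qsfp.materialized_JSON"  # hypothetical file name
VENDOR="brcm"
COLDBOOT_SAI="8.2.0.0_odp"
WARMBOOT_SAI="8.2.0.0_odp"
ASIC="tomahawk"

# $KEY follows the vendor/coldboot-sai/warmboot-sai/asic format:
KEY="${VENDOR}/${COLDBOOT_SAI}/${WARMBOOT_SAI}/${ASIC}"
echo "$KEY"   # prints brcm/8.2.0.0_odp/8.2.0.0_odp/tomahawk
```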

T0 Tests

Platform Services

  • all tests in platform_hw_test
  • all tests in data_corral_service_hw_test
  • all tests in fan_service_hw_test
  • all tests in fw_util_hw_test
  • all tests in platform_manager_hw_test
  • all tests in sensor_service_hw_test
  • all tests in weutil_hw_test

run_test.py:

./bin/run_test.py platform

Agent HW Tests

run_test.py:

./bin/run_test.py sai_agent \
--filter_file=./share/hw_sanity_tests/t0_agent_hw_tests.conf \
--config ./share/hw_test_configs/$CONFIG \
--enable-production-features $ASIC \
--skip-known-bad-tests $KEY
# Vlan
*Vlan*              t s
# L2 Learning (scale-out only: ESUN 6.2.a static MAC)
*MacLearning*       t
*MacSwLearning*     t
# Neighbor Resolution
*Neighbor*          t s
# L3 Routing
*L3*                t s
# Control Plane
*Copp*              t s
*PacketSend*        t s
*RxReason*          t s
*PacketFlood*       t
# Queuing
*SendPacketToQueue* t s
*DscpQueueMapping*  t s
*PortBandwidth*     t s
# Prbs
*Prbs*              t s
# Scale-up: init sanity (scale-up only — not in traditional T0)
*AgentEmpty*        s

SAI Tests

run_test.py:

./bin/run_test.py sai \
--filter_file=./share/hw_sanity_tests/t0_sai_tests.conf \
--config ./share/hw_test_configs/$CONFIG \
--skip-known-bad-tests $KEY
*Empty*             t s
*HwRoute*           t s
*PROFILE*           t s
*Vlan*              t s
*NextHopGroup*      t s
*PortAdminState*    t s

QSFP HW Tests

run_test.py:

./bin/run_test.py qsfp \
--filter_file=./share/hw_sanity_tests/t0_qsfp_hw_tests.conf \
--qsfp-config ./share/qsfp_test_configs/$QSFP_CONFIG \
--skip-known-bad-tests $KEY
# All tests listed below should pass for qualifying a new transceiver.
# Verifies all transceivers can be detected and programmed.
EmptyHwTest.CheckInit
# Verifies i2c communication with optic through many i2c reads.
HwTest.i2cStressRead
# Verifies i2c communication with optic through many i2c writes.
HwTest.i2cStressWrite
# Resets each transceiver sequentially to verify its absence during reset and
# presence after release, while ensuring other transceivers remain responsive throughout.
HwTransceiverResetTest.verifyResetControl
# Hard resets all transceivers and makes sure we can detect their presence again.
HwTransceiverResetTest.resetTranscieverAndDetectPresence
# Verifies transceiver programming behavior based on overridden programmed IPhyPortToPortInfo.
HwStateMachineTest.CheckPortsProgrammed
# Switches between page 0x10 and page 0x11 on CMIS modules on all ports and
# ensures that page 0x10 reads back the same every time.
HwTest.cmisPageChange

Use the qsfp hw test list below for any platform that does not support transceivers.

run_test.py:

./bin/run_test.py qsfp \
--filter_file=./share/hw_sanity_tests/t0_qsfp_hw_tests_without_transceivers.conf \
--qsfp-config ./share/qsfp_test_configs/$QSFP_CONFIG \
--skip-known-bad-tests $KEY
# All tests listed below should pass for platforms that do not have transceivers.

# Verifies qsfp service can correctly initialize
EmptyHwTest.CheckInit

# Verifies port programming behavior based on overridden programmed IPhyPortToPortInfo.
HwStateMachineTest.CheckPortsProgrammed

# Only run the following tests on platforms that have an external xphy (retimer/gearbox).

# Verifies that the XPHY firmware version is correct.
HwXphyFirmwareTest.CheckDefaultXphyFirmwareVersion

# Verifies mdio communication with xphy through many reads.
HwTest.mdioStressRead

# Verifies mdio communication with xphy through many writes.
HwTest.mdioStressWrite

Link Tests

run_test.py:

./bin/run_test.py link \
--agent-run-mode mono \
--filter_file ./share/hw_sanity_tests/t0_ensemble_link_tests.conf \
--config ./share/link_test_configs/$CONFIG \
--qsfp-config /opt/fboss/share/qsfp_test_configs/$QSFP_CONFIG \
--known-bad-tests-file ./share/link_known_bad_tests/agent_ensemble_link_known_bad_tests.materialized_JSON \
--skip-known-bad-tests $KEY
# Verifies basic link up behavior.
AgentEnsembleEmptyLinkTest.CheckInit
# Tests that both ends of the link discover themselves successfully over LLDP.
AgentEnsembleLinkTest.trafficRxTx
# Verifies link comes up after flap on ASIC.
AgentEnsembleLinkTest.asicLinkFlap
# Verifies all transceiver information is up to date in FBOSS agent.
AgentEnsembleLinkTest.getTransceivers
# Verifies internal PHY stats receive routine and correct updates.
AgentEnsembleLinkTest.iPhyInfoTest
# Creates an L3 data plane flood and asserts that none of the traffic-bearing ports lose traffic.
AgentEnsembleLinkSanityTestDataPlaneFlood.warmbootIsHitLess

Use the link test list below for any platform that does not support transceivers.

run_test.py:

./bin/run_test.py link \
--agent-run-mode mono \
--filter_file ./share/hw_sanity_tests/t0_ensemble_link_tests_without_transceivers.conf \
--config ./share/link_test_configs/$CONFIG \
--qsfp-config /opt/fboss/share/qsfp_test_configs/$QSFP_CONFIG \
--known-bad-tests-file ./share/link_known_bad_tests/agent_ensemble_link_known_bad_tests.materialized_JSON \
--skip-known-bad-tests $KEY
# Verifies basic link up behavior.
AgentEnsembleEmptyLinkTest.CheckInit
# Tests that both ends of the link discover themselves successfully over LLDP.
AgentEnsembleLinkTest.trafficRxTx
# Verifies link comes up after flap on ASIC.
AgentEnsembleLinkTest.asicLinkFlap
# Verifies internal PHY stats receive routine and correct updates.
AgentEnsembleLinkTest.iPhyInfoTest
# Creates an L3 data plane flood and asserts that none of the traffic-bearing ports lose traffic.
AgentEnsembleLinkSanityTestDataPlaneFlood.warmbootIsHitLess

BSP Tests

  • All BSP tests are T0

T1 Tests

Agent HW Tests

run_test.py:

./bin/run_test.py sai_agent \
--filter_file=./share/hw_sanity_tests/t1_agent_hw_tests.conf \
--config ./share/hw_test_configs/$CONFIG \
--enable-production-features $ASIC \
--skip-known-bad-tests $KEY
*Acl*               t s
*DscpMarking*       t s
*Pfc*               t s
*Qos*               t
*Aqm*               t s
*Ecmp*              t s
*LoadBalancer*      t s
*HwUdfTest*         t s
*QueuePerHost*      t
*IngressBuffer*     t s
# Scale-up: NetworkAI QoS (scale-up only — replaces Olympic *Qos*)
*NetworkAIQos*      s

QSFP HW Tests

note

The T1 QSFP HW test scope is controlled by the known-bad and unsupported test files, so no dedicated test list via --filter_file is needed.

run_test.py:

./bin/run_test.py qsfp \
--qsfp-config ./share/qsfp_test_configs/$QSFP_CONFIG \
--skip-known-bad-tests $KEY

Link Tests

note

The T1 Link test scope is controlled by the known-bad and unsupported test files, so no dedicated test list via --filter_file is needed.

run_test.py:

./bin/run_test.py link \
--agent-run-mode mono \
--config ./share/link_test_configs/$CONFIG \
--qsfp-config /opt/fboss/share/qsfp_test_configs/$QSFP_CONFIG \
--known-bad-tests-file ./share/link_known_bad_tests/agent_ensemble_link_known_bad_tests.materialized_JSON \
--skip-known-bad-tests $KEY

Agent Benchmark Tests

run_test.py:

./bin/run_test.py benchmark \
--filter_file ./share/hw_benchmark_tests/t1_benchmarks.conf
# T1 Agent Benchmark Test Suite

sai_tx_slow_path_rate-sai_impl
sai_rx_slow_path_rate-sai_impl
sai_ecmp_shrink_speed-sai_impl
sai_rib_resolution_speed-sai_impl
sai_stats_collection_speed-sai_impl

SAI Tests

run_test.py:

./bin/run_test.py sai \
--filter_file=./share/hw_sanity_tests/t1_sai_tests.conf \
--config ./share/hw_test_configs/$CONFIG \
--skip-known-bad-tests $KEY
*AclStat*           t s
*RouteStat*         t s
*AclTable*          t s
*HwInPause*         t s

T2 Tests

Agent HW Tests

run_test.py:

./bin/run_test.py sai_agent \
--filter_file=./share/hw_sanity_tests/t2_agent_hw_tests.conf \
--config ./share/hw_test_configs/$CONFIG \
--enable-production-features $ASIC \
--skip-known-bad-tests $KEY
*Trunk*             t
*Sflow*             t s
*Mirror*            t s
*Ptp*               t s
*MmuTuning*         t s
*ResourceStats*     t
# Scale-up: pipeline drop counters (scale-up only)
*EgressForwardingDiscard* s
*InNullRouteDiscard* s
*InTrapDiscard*     s

Agent Benchmark Tests

run_test.py:

./bin/run_test.py benchmark \
--filter_file ./share/hw_benchmark_tests/t2_benchmarks.conf
# T2 Agent Benchmark Test Suite

sai_fsw_scale_route_add_speed-sai_impl
sai_hgrid_du_scale_route_add_speed-sai_impl
sai_th_alpm_scale_route_add_speed-sai_impl
sai_fsw_scale_route_del_speed-sai_impl
sai_ecmp_shrink_with_competing_route_updates_speed-sai_impl
sai_th_alpm_scale_route_del_speed-sai_impl
sai_hgrid_du_scale_route_del_speed-sai_impl
sai_init_and_exit_40Gx10G-sai_impl
sai_init_and_exit_100Gx10G-sai_impl
sai_init_and_exit_100Gx25G-sai_impl
sai_init_and_exit_100Gx50G-sai_impl
sai_init_and_exit_100Gx100G-sai_impl
sai_switch_reachability_change_speed-sai_impl

SAI Tests

run_test.py:

./bin/run_test.py sai \
--filter_file=./share/hw_sanity_tests/t2_sai_tests.conf \
--config ./share/hw_test_configs/$CONFIG \
--skip-known-bad-tests $KEY
*AlpmStress*        t
*ArsFlowlet*        t
*ArsSpray*          t
*EcmpTrunk*         t
*Hash*              t s
*LoadBalancer*      t s
*PacketSendReceiveLag* t
*ParityError*       t s
*PortStress*        t s
*ProdInvariantsFswStrictPriority* t
*PtpTc*             t s
*Sflow*             t s
*SplitAgentCallback* t s
*SwitchStateReplay* t s
*Trunk*             t
*QPHRollback*       t
*Rollback*          t s

Scale-Up Tests

Scale-up specific tests belonging to each T0/T1/T2 tier can be run by adding --profile=s to the standard run_test.py command.

Agent HW Tests

./bin/run_test.py sai_agent \
--filter_file=./share/hw_sanity_tests/t0_agent_hw_tests.conf \
--profile=s \
--config ./share/hw_test_configs/$CONFIG \
--enable-production-features \
--production-features ./share/production_features/asic_production_features.materialized_JSON \
--known-bad-tests-file ./share/hw_known_bad_tests/sai_agent_known_bad_tests.materialized_JSON \
--unsupported-tests-file $UNSUPPORTED_TESTS \
--asic $ASIC \
--skip-known-bad-tests $KEY

Replace t0 with t1 or t2 to run the corresponding tier.
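Since only the tier prefix of the filter file changes between runs, all three tiers can be driven from a small loop. This is a sketch that echoes each tier's filter path; in practice you would substitute the full run_test.py invocation shown above:

```shell
# Print the per-tier filter file passed via --filter_file; replace the
# echo with the complete run_test.py command to actually run each tier.
for tier in t0 t1 t2; do
  filter="./share/hw_sanity_tests/${tier}_agent_hw_tests.conf"
  echo "$filter"
done
```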