Last Updated On : 13-Jan-2026


Fortinet FCSS - Public Cloud Security 7.6 Architect - FCSS_CDS_AR-7.6 Practice Questions

Total 39 Questions



The smartest way to prepare for your Fortinet FCSS_CDS_AR-7.6 exam isn't just reading—it's practicing. Our Fortinet FCSS - Public Cloud Security 7.6 Architect practice test bridges the gap, transforming your knowledge into a passing score. Familiarize yourself with the exact style and difficulty of the real Fortinet FCSS_CDS_AR-7.6 exam questions, so there are no surprises. Get detailed feedback to identify your strengths and target your weaknesses, making your study time more efficient.

You must add an Amazon Web Services (AWS) network access list (NACL) rule to allow SSH traffic to a subnet for temporary testing purposes. When you review the current inbound and outbound NACL rules, you notice that the rules with number 5 deny SSH and telnet traffic to the subnet.
What can you do to allow SSH traffic?



A. You do not have to create any NACL rules because the default security group rule automatically allows SSH traffic to the subnet.


B. You must create a new allow SSH rule anywhere in the network ACL rule base to allow SSH traffic.


C. You must create two new allow SSH rules, each with a number bigger than 5.


D. You must create two new allow SSH rules, each with a number smaller than 5.





D.
  You must create two new allow SSH rules, each with a number smaller than 5.

Explanation

An AWS Network Access Control List (NACL) is a stateless, ordered set of rules that acts as a firewall for a subnet. Rules are processed sequentially, starting with the lowest rule number, and the list stops processing once a match (either ALLOW or DENY) is found. Since rule number 5 is explicitly denying SSH traffic, any new rule intended to permit SSH must be placed with a lower rule number to ensure it is evaluated and applied before the traffic is blocked by rule 5.

✅ Correct Option: D
You must create two new ALLOW SSH rules, each with a number smaller than 5 (e.g., rule 1 and rule 2). NACLs require both an Inbound and an Outbound rule to permit two-way, stateless traffic. Assigning a number lower than the existing DENY rule (number 5) gives the new ALLOW rules precedence in the processing order. This ensures the SSH traffic matches the permit rule and is accepted before the list reaches the explicit deny, thereby enabling temporary testing.

❌ Incorrect Option: A
This is incorrect because Security Groups and NACLs serve distinct roles. A Security Group is stateful and operates at the instance level, while an NACL is stateless and operates at the subnet level. If the NACL has an explicit DENY rule (like rule 5), it will block the traffic at the subnet boundary regardless of what the stateful Security Group might allow. Therefore, modifying the NACL is essential for allowing the SSH traffic.

❌ Incorrect Option: B
This is incorrect because the location (i.e., the rule number) is the most critical factor in NACL rule evaluation. NACLs are processed in strict ascending order, and the first matching rule is executed. Creating an ALLOW rule anywhere else, particularly with a number greater than 5, would result in the existing explicit DENY rule (number 5) being matched and applied first, effectively blocking the SSH traffic before the new allow rule is even checked.

❌ Incorrect Option: C
This is incorrect because setting the new ALLOW rules with a number bigger than 5 will not allow SSH traffic. NACL evaluation starts at the lowest number. Rule 5, which explicitly DENIES SSH traffic, will be processed before any rule with a number greater than 5. Since NACL processing stops at the first match, the traffic will be denied by rule 5 and will never reach the higher-numbered ALLOW rules you intended to add.

Summary
AWS NACLs are stateless and process rules numerically from low to high, stopping at the first match. To bypass the explicit DENY rule at number 5, new ALLOW rules must be inserted with a rule number having higher precedence (a lower number) than the blocking rule. This requires two new rules—one for inbound and one for outbound SSH traffic—with numbers smaller than 5.
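The first-match evaluation described above can be sketched as a small simulation. This is a toy model, not the AWS implementation: the rule tuples and the `evaluate_nacl` helper are illustrative only.

```python
def evaluate_nacl(rules, protocol):
    """Evaluate stateless NACL rules in ascending rule-number order.

    rules: list of (rule_number, protocol, action) tuples.
    Returns the action of the first matching rule, or the implicit
    default deny (the '*' rule) if nothing matches.
    """
    for number, rule_protocol, action in sorted(rules):
        if rule_protocol in (protocol, "all"):
            return action  # first match wins; evaluation stops here
    return "DENY"  # implicit default deny at the end of every NACL


# Existing rule 5 denies SSH. An ALLOW rule numbered higher (10) never
# takes effect, while one numbered lower (4) takes precedence.
rules = [(5, "ssh", "DENY"), (10, "ssh", "ALLOW")]
print(evaluate_nacl(rules, "ssh"))  # DENY: rule 5 matches first

rules.append((4, "ssh", "ALLOW"))
print(evaluate_nacl(rules, "ssh"))  # ALLOW: rule 4 now wins
```

Remember that in a real NACL this must be done twice, once in the inbound rule list and once in the outbound list, because NACLs are stateless.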

Reference:
Control subnet traffic with network access control lists

An AWS administrator must ensure that each member of the cloud deployment team has the correct permissions to deploy and manage resources using CloudFormation. The administrator is researching which tasks must be executed with CloudFormation and therefore require CloudFormation permissions.
Which task is run using CloudFormation?



A. Deploying a new pod with a service in an Elastic Kubernetes Service (EKS) cluster using the kubectl command


B. Installing a Helm chart to deploy a FortiWeb ingress controller in an EKS cluster


C. Creating an EKS cluster with the eksctl create cluster command


D. Changing the number of nodes in an EKS cluster from AWS CloudShell





D.
  Changing the number of nodes in an EKS cluster from AWS CloudShell

Explanation

This question tests your ability to identify tasks that are executed using AWS CloudFormation, AWS's infrastructure-as-code service. The correct task is one that leverages CloudFormation to provision or update AWS resources through its declarative template system. This involves creating a stack or updating it with a new template, not using other AWS CLIs, Kubernetes tools, or interactive consoles. The core concept is understanding that CloudFormation is about modeling and managing AWS resources via templates.

📗 Correct Option

D. Changing the number of nodes in an EKS cluster from AWS CloudShell
This is the correct answer because modifying a CloudFormation stack's template to change a property like DesiredSize for a node group's ScalingConfig is a primary use case for CloudFormation. This action updates the underlying Auto Scaling group and is precisely how you manage such infrastructure changes through CloudFormation's resource definitions. AWS CloudShell is just the environment; the action is performed by updating a CloudFormation stack.

📕 Incorrect Options

A. Deploying a new pod with a service in an Elastic Kubernetes Service (EKS) cluster using the kubectl command
This is incorrect because kubectl is the native Kubernetes command-line tool for interacting with a cluster's API. It operates on Kubernetes resources (like Pods and Services) after the cluster infrastructure is already provisioned. This is outside the scope of CloudFormation, which manages the AWS resources that host the cluster, not the software workloads inside it.

B. Installing a Helm chart to deploy a FortiWeb ingress controller in an EKS cluster
This is incorrect. While CloudFormation can technically be used to deploy Helm charts via custom resources, the described task specifically mentions using a Helm chart. Helm is a separate package manager for Kubernetes that runs commands like helm install. The task's execution method is Helm, not CloudFormation, making it an incorrect choice for this question.

C. Creating an EKS cluster with the eksctl create cluster command
This is incorrect. The eksctl command is a dedicated CLI tool for creating and managing EKS clusters. While it may use CloudFormation under the hood, the task is explicitly run using the eksctl command. The question asks for a task "run using CloudFormation," meaning the user's direct action is with CloudFormation, not a different tool that abstracts it away.

Summary
To determine if a task uses CloudFormation, check if it directly involves creating, updating, or deleting a CloudFormation stack using a template. The correct answer involves modifying a stack property (DesiredSize), a core CloudFormation operation. Incorrect answers use other tools like kubectl, helm, or eksctl.
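As an illustration, scaling a managed node group through CloudFormation means editing the ScalingConfig block of the stack template and re-deploying it. The fragment below is a hypothetical sketch: the logical name, cluster name, role ARN, and subnet IDs are placeholders, not values from the exam scenario.

```yaml
# Hypothetical CloudFormation template fragment for an EKS node group.
# Changing DesiredSize and updating the stack resizes the node group.
Resources:
  ExampleNodegroup:                 # illustrative logical name
    Type: AWS::EKS::Nodegroup
    Properties:
      ClusterName: example-cluster
      NodeRole: arn:aws:iam::111122223333:role/ExampleNodeRole
      Subnets:
        - subnet-0example1
        - subnet-0example2
      ScalingConfig:
        MinSize: 1
        DesiredSize: 3              # edit this value, then update the stack
        MaxSize: 5
```

The change takes effect when the stack is updated (for example with `aws cloudformation update-stack`), which is why the task counts as "run using CloudFormation" even when launched from AWS CloudShell.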

🔗 Reference
AWS::EKS::Nodegroup ScalingConfig
Create Amazon EKS resources with AWS CloudFormation

Which statement about Transit Gateway (TGW) in Amazon Web Services (AWS) is true?



A. Both the TGW attachment and propagation must be in the same TGW route table.


B. TGW can have multiple TGW route tables.


C. A TGW attachment can be associated with multiple TGW route tables.


D. The TGW default route table cannot be disabled.





B.
  TGW can have multiple TGW route tables.

Explanation

AWS Transit Gateway supports multiple route tables, allowing you to segment and control routing across different VPCs or networks. Each attachment is associated with one route table, while route propagation can be enabled on others. This flexibility helps create isolated environments, enforce security boundaries, and manage routing behavior more precisely in complex AWS deployments.

🟩 Correct Option (B)
TGW can maintain several route tables, enabling different attachments to use different routing domains. This setup helps businesses separate environments such as production, testing, and development. It also supports more advanced routing strategies, preventing unnecessary traffic exposure and giving administrators better control over connectivity.

🟥 Incorrect Options

A — Incorrect
Association and propagation do not need to occur within the same TGW route table. An attachment may be associated with one table while its propagated routes can be directed to another. This separation is intentional and gives Transit Gateway flexible routing behavior.

C — Incorrect
A TGW attachment can only be associated with a single route table. Although propagation can occur across multiple route tables, the association itself is limited to one. This design ensures predictable and consistent routing decisions.

D — Incorrect
The default TGW route table can be disabled. AWS allows turning off automatic association or automatic propagation to the default table, preventing unintended routing between VPCs that are attached to the Transit Gateway.

Summary
Transit Gateway supports multiple route tables for routing segmentation. Each attachment associates with only one table but can propagate routes to others. The default route table is optional and can be turned off.
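The association-versus-propagation distinction above can be illustrated with AWS CLI calls. This is a hedged sketch: all IDs are placeholders, and the sequence assumes the TGW and attachments already exist.

```
# Create an additional TGW route table (a TGW can have many).
aws ec2 create-transit-gateway-route-table \
    --transit-gateway-id tgw-0example

# An attachment is associated with exactly one route table...
aws ec2 associate-transit-gateway-route-table \
    --transit-gateway-route-table-id tgw-rtb-0prod \
    --transit-gateway-attachment-id tgw-attach-0vpc1

# ...but it can propagate its routes into additional route tables.
aws ec2 enable-transit-gateway-route-table-propagation \
    --transit-gateway-route-table-id tgw-rtb-0shared \
    --transit-gateway-attachment-id tgw-attach-0vpc1
```

This separation is what lets one attachment live in a "production" routing domain while still advertising its routes into a shared-services table.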

Reference
AWS Transit Gateway Routing – Documented in official Fortinet AWS integration and deployment guides.

You are using Ansible to modify the configuration of several FortiGate VMs.
What is the minimum number of files you need to create, and in which file should you configure the target FortiGate IP addresses?



A. One playbook file for each target and the required tasks, and one inventory file.


B. One .yaml file with the targets' IP addresses, and one playbook file with the tasks.


C. One inventory file for each target device, and one playbook file.


D. One text file for all target devices, and one playbook file.





C.
  One inventory file for each target device, and one playbook file.

Explanation

When using Ansible to manage multiple FortiGate VMs, the standard and most efficient setup requires only two files: an inventory file that lists all target FortiGate IP addresses (and connection details) and a single playbook file that contains the configuration tasks. This design enables one playbook to run against any number of devices without duplication.

Correct Option: ✅ C. One inventory file for each target device, and one playbook file.
This is the recommended Ansible structure. One inventory file (typically in INI or YAML format) holds all FortiGate IP addresses, groups, and variables. One playbook defines the tasks once and applies them to any or all devices listed in that inventory — scalable, maintainable, and fully aligned with Fortinet’s own automation examples.

Incorrect Option: ❌ A. One playbook file for each target and the required tasks, and one inventory file.
Creating a separate playbook per FortiGate defeats the entire purpose of automation. It leads to repeated code, difficult updates, and zero scalability — completely against Ansible best practices and Fortinet automation guidelines.

Incorrect Option: ❌ B. One .yaml file with the targets IP addresses, and one playbook file with the tasks.
Ansible strictly separates hosts from tasks. Target IP addresses must live in a dedicated inventory file, not inside the playbook or any random .yaml file. Mixing them breaks host selection, grouping, and variable inheritance.

Incorrect Option: ❌ D. One text file for all target devices, and one playbook file.
A plain text file has no structure Ansible can parse. Inventory files must follow strict INI or YAML syntax so Ansible can correctly identify hosts and connection parameters.

Summary
Only two files are needed: one inventory + one playbook. All FortiGate IP addresses must be configured in the inventory file. Option C is the only correct and officially supported method.
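A minimal two-file layout might look like the sketch below. The file names, group name, host addresses, and the sample task are all hypothetical; the `fortinet.fortios` collection and its `httpapi` connection are the mechanism Fortinet's Ansible examples use, but verify module names against your installed collection version.

```yaml
# inventory.yml — hypothetical inventory holding every target FortiGate
all:
  children:
    fortigates:
      hosts:
        fgt1:
          ansible_host: 10.0.1.10
        fgt2:
          ansible_host: 10.0.1.11
```

```yaml
# playbook.yml — one playbook applied to every host in the group
- hosts: fortigates
  connection: httpapi
  collections:
    - fortinet.fortios
  tasks:
    - name: Set the hostname (illustrative task)
      fortios_system_global:
        system_global:
          hostname: "{{ inventory_hostname }}"
```

Adding a third FortiGate then means adding one line to the inventory; the playbook never changes.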

Reference:
Fortinet Docs – Automating FortiGate with Ansible

You have deployed a FortiGate HA cluster in Azure using a gateway load balancer for traffic inspection. However, traffic is not being routed correctly through the firewalls.
What can be the cause of the issue?



A. The FortiGate VMs have IP forwarding disabled, which is required for traffic inspection.


B. The health probes for the gateway load balancer are failing, which causes traffic to bypass the HA cluster.


C. The gateway load balancer is not associated with the correct network security group (NSG) rules, which allow traffic to pass through.


D. The protected VMs are in a different Azure subscription, which prevents the gateway load balancer from forwarding traffic.





B.
  The health probes for the gateway load balancer are failing, which causes traffic to bypass the HA cluster.

Explanation

In an Azure FortiGate HA cluster, a gateway load balancer directs traffic to a healthy, active firewall node. Its operation is entirely dependent on successful health probes. If all FortiGate VMs fail to respond to these probes, the load balancer has no valid backend target. Consequently, it stops forwarding traffic entirely, causing the inspection cluster to be bypassed and breaking the intended traffic flow.

✅ Correct Option

B. The health probes for the gateway load balancer are failing, which causes traffic to bypass the HA cluster.
This is correct. The health probe is the load balancer's mechanism to determine which backend instance (FortiGate VM) is operational. If probes fail on all cluster members, the load balancer's backend pool is considered empty. The load balancer will then drop traffic, as it has no healthy target to forward it to, effectively bypassing the firewalls. The official Fortinet deployment guide for Azure confirms that "Azure Load Balancer (LB) handles traffic failover using a health probe."

❌ Incorrect Options

A. The FortiGate VMs have IP forwarding disabled, which is required for traffic inspection.
Incorrect. While IP forwarding must be enabled on the VM's network interface in Azure for it to route traffic, this is a fundamental and typically correct deployment setting. A complete traffic stop is more directly caused by the load balancer's traffic steering mechanism failing, not this underlying feature.

C. The gateway load balancer is not associated with the correct network security group (NSG) rules, which allow traffic to pass through.
Incorrect. Network Security Groups (NSGs) act as firewalls. An NSG blocking the health probe port could cause the probe failure, but the root issue is the probe failure itself. The load balancer does not use NSG rules to "allow traffic to pass through"; it uses them for access control.

D. The protected VMs are in a different Azure subscription, which prevents the gateway load balancer from forwarding traffic.
Incorrect. A gateway load balancer's backend pool (the FortiGate VMs) must be in the same subscription as the load balancer itself. Protected VMs in a different subscription would be unreachable due to network architecture (like missing VNet peering), not a direct limitation of the load balancer's forwarding capability.

Summary
For a gateway load balancer to route traffic through a FortiGate HA cluster, its health probes must succeed. Failed probes mean the load balancer has no operational firewall to send traffic to, causing a complete bypass.
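On the FortiGate side, answering the load balancer's probes is typically handled with the probe-response settings. The fragment below is a hedged sketch: the probe port (8008 is common in Fortinet's Azure examples) and mode must match whatever is configured on the Azure health probe, so treat these values as assumptions to verify against your deployment guide.

```
config system probe-response
    set mode http-probe        # reply to the load balancer's HTTP probe
    set http-probe-value OK
    set port 8008              # must match the probe port set in Azure
end
```

If this responder is missing, misconfigured, or blocked by an NSG, the probes fail and the gateway load balancer stops steering traffic through the cluster, which is exactly the bypass described above.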

🔗 Reference
The critical role of the health probe in this architecture is defined in the official Fortinet Public Cloud 7.6.0 Administration Guide for Azure, in the section detailing the FortiGate HA cluster deployment with an Azure Load Balancer.

An administrator is trying to implement FortiCNP with Microsoft Azure Security integration. However, FortiCNP is not able to extract any cloud integration data from Azure; therefore, real-time cloud security monitoring is not possible.
What is causing this issue?



A. The organization is using a free Azure AD license.


B. The Azure account doesn't have the global administrator role.


C. The administrator enabled the wrong defender plan for servers.


D. The FortiCNP account in Azure has the Storage Blob Data Reader role.





D.
  The FortiCNP account in Azure has the Storage Blob Data Reader role.

Explanation

FortiCNP must access security findings and configurations from various Azure services (like Defender for Cloud). The Storage Blob Data Reader role is severely restrictive, granting access only to storage contents. This narrow scope prevents FortiCNP from gathering the necessary metadata and resource inventory across the subscription, which is essential for comprehensive, real-time security monitoring and cloud security posture management (CSPM).

✅ Correct Option: The FortiCNP account in Azure has the Storage Blob Data Reader role.
This role provides minimal access, specifically for reading storage blobs only. ✅ FortiCNP needs a broad permission set, such as the Reader role at the subscription level, to successfully query configurations, security findings, and inventory from all necessary Azure services. This overly restrictive role prevents the required data ingestion.

❌ Incorrect Option: The organization is using a free Azure AD license.
The failure is caused by a restrictive Azure Resource Manager (ARM) role on the subscription resources, not the Azure AD license tier. ❌ FortiCNP's fundamental ability to connect and retrieve basic security data is governed by the assigned access role, which provides resource permissions, not by the specific Entra ID license used.

❌ Incorrect Option: The administrator enabled the wrong defender plan for servers.
An incorrect Defender plan would result in limited quality or fewer security findings. ❌ However, if FortiCNP had the correct access role, it would still be able to successfully connect and extract some data. A complete inability to extract any data indicates a foundational failure in the access role required to perform queries across services.

❌ Incorrect Option: The Azure account doesn't have the global administrator role.
The Global Administrator role is necessary only for the initial onboarding and setup of the service principal. ❌ For regular, daily data extraction, a less-privileged role like Reader is sufficient and preferred. The issue is having an explicitly restrictive role, not the absence of the highest administrative privilege.

Summary
The complete inability of FortiCNP to extract cloud security data is due to its service principal being assigned the overly restrictive Storage Blob Data Reader role. This role only grants access to storage objects and is insufficient for retrieving the subscription-wide security findings and configuration metadata required from the Microsoft Defender for Cloud service for accurate, real-time security monitoring.
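The fix is to grant the FortiCNP service principal a subscription-wide read role rather than the narrow storage role. The Azure CLI sketch below is illustrative: the application ID and subscription ID are placeholders, and the exact role required should be confirmed against the FortiCNP onboarding documentation.

```
# Assign subscription-wide Reader to the FortiCNP service principal
# (placeholders shown in angle brackets).
az role assignment create \
    --assignee <forticnp-app-id> \
    --role "Reader" \
    --scope /subscriptions/<subscription-id>
```

With Reader at the subscription scope, FortiCNP can enumerate resources and pull Defender for Cloud findings, which the Storage Blob Data Reader role cannot provide.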

Reference
Fortinet Document Library: FortiCNP official documentation specifies the required Azure role permissions (e.g., Reader or equivalent custom roles) for successful integration and data ingestion.
Search for: FortiCNP Azure subscription role requirements.

An administrator is relying on an Azure Bicep linter to find possible issues in Bicep files.
Which problem can the administrator expect to find?



A. The resources to be deployed exceed the quota for a region.


B. Some resources are missing dependsOn statements.


C. There are output statements that contain passwords.


D. One or more modules are not using runtime values as parameters.





B.
  Some resources are missing dependsOn statements.

Explanation

The Azure Bicep linter is a static analysis tool that reviews Bicep files before deployment to detect potential syntax errors, structural issues, and best-practice violations. One key function of the linter is to identify missing dependsOn statements, which indicate that the deployment order of resources may be ambiguous. Properly specifying dependencies ensures that resources are created in the correct sequence and prevents deployment errors caused by unsatisfied dependencies.

🟩 Correct Option (B)
The linter can detect when resources do not explicitly define dependsOn relationships. Without these, Azure might attempt to deploy resources in the wrong order, causing runtime failures. Adding dependsOn statements clarifies dependencies and ensures that, for example, a virtual machine is deployed only after its network interface and storage account exist. This helps maintain predictable and reliable deployments.

🟥 Incorrect Options

A — Incorrect
The linter does not check Azure quotas for a subscription or region. Resource limits, such as the number of VMs allowed in a region, are only evaluated at deployment time. These runtime issues cannot be predicted from static Bicep code, so relying on the linter will not reveal them.

C — Incorrect
While exposing secrets in outputs is a security concern, the Bicep linter primarily checks syntax, structure, and dependencies. It does not automatically flag output statements containing sensitive information like passwords. Developers must use secure parameters or Key Vault references to manage secrets safely.

D — Incorrect
Modules that use runtime values as parameters are not inherently a linter issue. The linter ensures that module syntax and references are correct, but it does not enforce how parameters are provided. Runtime value usage is allowed as long as it does not break dependencies or violate syntax rules.

Summary
Azure Bicep linter is a pre-deployment tool that helps ensure resource definitions are correct and follow best practices. Its primary value is detecting missing dependencies, syntax errors, and structural issues, not runtime conditions like quotas or secret exposure. Proper use of dependsOn statements ensures resources deploy in the intended order and prevents deployment failures.
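A dependency can be declared explicitly, as sketched below with hypothetical resource names. Note that referencing another resource symbolically (for example via `stg.id`) creates the same dependency implicitly, so an explicit dependsOn is only needed when no such reference exists.

```bicep
// Hypothetical names: ensure the storage account deploys before the VM.
resource stg 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'examplediag001'
  location: resourceGroup().location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
}

resource vm 'Microsoft.Compute/virtualMachines@2023-07-01' = {
  name: 'example-vm'
  location: resourceGroup().location
  dependsOn: [
    stg   // explicit ordering when no symbolic reference exists
  ]
  properties: {
    hardwareProfile: { vmSize: 'Standard_B2s' }
  }
}
```

Without the dependency (explicit or implicit), Azure may deploy both resources in parallel and fail if the VM's configuration assumes the storage account already exists.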

Reference
Azure Bicep Linter Documentation


Why Prepare with PrepForti FCSS_CDS_AR-7.6 Practice Test?

Choosing the right preparation material is critical for passing the Fortinet FCSS - Public Cloud Security 7.6 Architect exam. Here’s how our FCSS_CDS_AR-7.6 practice test is designed to bridge the gap between knowledge and a passing score.

Experience the Real Exam Format:


Familiarize yourself with the exact style, difficulty, and question types you will encounter on the official Fortinet exam. Our FCSS - Public Cloud Security 7.6 Architect FCSS_CDS_AR-7.6 test questions, like the samples on this page, cover specific technical scenarios and MCQs to ensure there are no surprises on test day.

Turn Knowledge into Application:


The smartest way to prepare isn't just reading - it's practicing. Our FCSS - Public Cloud Security 7.6 Architect practice test questions transform your theoretical understanding into practical problem-solving skills, exactly what is required to pass.

Learn with Detailed Explanations:


All FCSS_CDS_AR-7.6 exam questions come with a comprehensive summary and a breakdown of why the correct option is right and the others are wrong. This detailed feedback helps you identify your strengths and target your weaknesses, making your FCSS - Public Cloud Security 7.6 Architect study time far more efficient.



Experience the Real Exam Now!

How to Pass the Fortinet FCSS Public Cloud Security 7.6 Architect Exam on the First Attempt


The FCSS_CDS_AR-7.6 – FCSS Public Cloud Security 7.6 Architect exam validates your ability to design, deploy, and manage Fortinet cloud security architectures across multi-cloud environments. Candidates must demonstrate both technical depth and architectural decision-making across AWS, Azure, and Google Cloud.

Understand the Exam Blueprint


The exam (FCSS_CDS_AR-7.6) typically contains 60 multiple-choice questions. You will have 105 minutes (1 hour, 45 minutes) to complete it. Focus your studies on these core areas:

1. Cloud Security Architecture & Design
2. Fortinet Security Fabric Integration Across Clouds
3. Advanced FortiGate Deployment Models
4. Secure Connectivity & Zero Trust Implementations
5. Automation, Orchestration, and Infrastructure-as-Code
6. Monitoring, Logging & Incident Response in Cloud Environments
7. Best Practices for Scalability, Availability & Compliance

How to Prepare Effectively


Start by reviewing Fortinet official training material and gaining hands-on experience with FortiGate, FortiManager, and cloud-native services. Build real multi-cloud lab scenarios—this helps reinforce core architectural principles and gives you the confidence to solve exam-style challenges.

Why Fortinet FCSS Public Cloud Security 7.6 Architect Practice Tests Matter


A major contributor to first-attempt success is familiarity with the exam style. High-quality FCSS Public Cloud Security 7.6 Architect practice questions sharpen your reasoning, highlight weak areas, and improve test-taking speed. The Fortinet FCSS_CDS_AR-7.6 practice exam from PrepForti.com closely mirrors the difficulty and structure of the real exam, helping candidates approach test day with confidence and clarity.

Final Tips

Create a structured study plan, practice consistently, and focus on understanding why a solution works—not just what to configure. With the right preparation and realistic Fortinet FCSS Public Cloud Security 7.6 Architect practice test, passing the Fortinet exam on your first attempt is absolutely achievable.

Fortinet FCSS - Public Cloud Security 7.6 Architect Practice Exam Questions