
NETWORK BASICS

Network A system of interconnected computers and computerized peripherals, such as printers, is called a computer network. This interconnection facilitates information sharing among them. Computers may connect to each other by either wired or wireless media. A computer network is a collection of computers, printers, and other equipment connected together so that the devices can communicate with each other.


Network application
A network application is any application running on one host that provides communication to another application running on a different host. The application may use an existing application layer protocol such as HTTP (e.g., a browser and web server) or SMTP (e.g., an email client), or it may use no existing protocol and instead depend on socket programming to communicate with the other application. A web application, then, is one type of network application.

Evolution of intelligent networks



a. Describe the effects of cloud resources on enterprise network architecture
b. Traffic path to internal and external cloud services; virtual services
c. Basic virtual network infrastructure
d. Describe basic QoS concepts
   4.3.a Marking
   4.3.b Device trust
   4.3.c Prioritization
      (i) Voice
      (ii) Video
      (iii) Data
   4.3.d Shaping
   4.3.e Policing
   4.3.f Congestion management
4.5 Verify ACLs using the APIC-EM Path Trace ACL analysis tool
5.5 Describe network programmability in enterprise network architecture
   5.5.a Function of a controller
   5.5.b Separation of control plane and data plane
   5.5.c Northbound and southbound APIs

This all-new chapter is totally focused on the CCNA objectives for intelligent networks. I’ll start by covering switch stacking using Stack-Wise and then move on to discuss the important realm of cloud computing and its effect on the enterprise network.
I’m going to stick really close to the objectives on the more difficult subjects to help you tune in specifically to the content that’s important for the CCNA, including the following: Software Defined Networking (SDN), application programming interfaces (APIs), Cisco’s Application Policy
Infrastructure Controller Enterprise Module (APIC-EM), Intelligent WAN, and finally, quality of service (QoS). While it’s good to understand the cloud and SDN because they’re certainly objectives for the CCNA, just know that they aren’t as critical for the objectives as the QoS section found in this chapter.
In this chapter, I really only have the space to introduce the concepts of network programmability and SDN because the topic is simply too large in scope. Plus, this chapter is already super challenging because it’s a foundational chapter containing no configurations. Just remember that I’m going to spotlight the objectives in this chapter to make this chapter and its vital content as potent but painless as possible! We’ll check off every exam objective by the time we’re through.


It’s hard to believe that Cisco is using switch stacking to start their “Evolution of Intelligent Networks” objectives because switch stacking has been around since the word cloud meant 420 in my home town of Boulder, but I digress.

A typical access closet contains access switches placed next to each other in the same rack and uses high-speed redundant links with copper, or more typically fiber, to the distribution layer switches.

Here are three big drawbacks to a typical switch topology:
  • Management overhead is high.
  • STP will block half of the up-links.
  • There is no direct communication between switches.
Cisco Stack-Wise technology connects switches that are mounted in the same rack together so they basically become one larger switch. By doing this, you can incrementally add more access ports for each closet while avoiding the cost of upgrading to a bigger switch. So you’re adding ports as you grow your company instead of front loading the investment into a pricier, larger switch all at once. And since these stacks are managed as a single unit, it reduces the management in your network. All switches in a stack share configuration and routing information, so you can easily add or remove switches at any time without disrupting your network or affecting its performance.

To create a Stack-Wise unit, you combine individual switches into a single, logical unit using special stack interconnect cables. This creates a bidirectional, closed-loop path in the stack. Here are some other features of Stack-Wise:
  • Any changes to the network topology or routing information are updated continuously through the stack interconnect.
  • A master switch manages the stack as a single unit. The master switch is elected from one of the stack member switches.
  • You can join up to nine separate switches in a stack.
  • Each stack of switches has only a single IP address, and the stack is managed as a single object. You’ll use this single IP address for managing the stack, including fault detection, VLAN database updates, security, and QoS controls. Each stack has only one configuration file, which is distributed to each switch in the Stack-Wise. 
  • Using Cisco Stack-Wise reduces management overhead, and at the same time, multiple switches in a stack can create an EtherChannel connection, eliminating the need for STP.
These are the benefits to using Stack-Wise technology, specifically mapped to the CCNA objectives to memorize:
  • Stack-Wise provides a method to join multiple physical switches into a single logical switching unit.
  • Switches are united by special interconnect cables.
  • The master switch is elected.
  • The stack is managed as a single object and has a single management IP address.
  • Management overhead is reduced.
  • STP is no longer needed if you use EtherChannel.
  • Up to nine switches can be in a Stack-Wise unit.
One more very cool thing…when you add a new switch to the stack, the master switch automatically configures the unit with the currently running IOS image as well as the configuration of the stack. So you don’t have to do anything to bring up the switch before it’s ready to operate…nice!


Cloud computing is by far one of the hottest topics in today’s IT world. Basically, cloud computing can provide virtualized processing, storage, and computing resources to users remotely, making the resources transparently available regardless of the user connection. To put it simply, some people just refer to the cloud as “someone else’s hard drive.” This is true, of course, but the cloud is much more than just storage. The history of the consolidation and virtualization of our servers tells us that this has become the de facto way of implementing servers because of basic resource efficiency. Two physical servers will use twice the amount of electricity as one server, but through virtualization, one physical server can host two virtual machines, hence the main thrust toward virtualization. With it, network components can simply be shared more efficiently. Users connecting to a cloud provider’s network, whether it be for storage or applications, really don’t care about the underlying infrastructure because as computing becomes a service rather than a product, it’s then considered an on-demand resource.

Centralization/consolidation of resources, automation of services, virtualization, and standardization are just a few of the big benefits cloud services offer.

Cloud computing has several advantages over the traditional use of computer resources. Following are advantages to the provider and to the cloud user.

Here are the advantages to a cloud service builder or provider:
  • Cost reduction, standardization, and automation
  • High utilization through virtualized, shared resources 
  • Easier administration
  • Fall-in-place operations model
And here are the advantages to cloud users:
  • On-demand, self-service resource provisioning
  • Fast deployment cycles
  • Cost effective
  • Centralized appearance of resources
  • Highly available, horizontally scaled application architectures
  • No local backups required
Having centralized resources is critical for today’s workforce. For example, if you have your documents stored locally on your laptop and your laptop gets stolen, you’re pretty much screwed unless you’re doing constant local backups. That is so 2005!

After I lost my laptop and all the files for the book I was writing at the time, I swore (yes, I did that too) to never have my files stored locally again. I started using only Google Drive, One Drive, and Dropbox for all my files, and they became my best backup friends. If I lose my laptop now, I just need to log in from any computer from anywhere to my service provider’s logical drives and presto, I have all my files again. This is clearly a simple example of using cloud computing, specifically SaaS (which is discussed next), and it’s wonderful! So cloud computing provides for the sharing of resources, lower cost operations passed to the cloud consumer, computing scaling, and the ability to dynamically add new servers without going through the procurement and deployment process.


Cloud providers can offer you different available resources based on your needs and budget. You can choose just a virtualized network platform or go all in with the network, OS, and application resources.

You can see that IaaS allows the customer to manage most of the network, whereas SaaS doesn’t allow any management by the customer, and PaaS is somewhere in the middle of the two. Clearly, choices can be cost driven, so the most important thing is that the customer pays only for the services or infrastructure they use.


Infrastructure as a Service (IaaS): Provides only the network. Delivers computer infrastructure—a platform virtualization environment—where the customer has the most control and management capability.

Platform as a Service (PaaS): Provides the operating system and the network. Delivers a computing platform and solution stack, allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an application. An example is Windows Azure.

Software as a Service (SaaS): Provides the required software, operating system, and network. SaaS is common application software such as databases, web servers, and email software that’s hosted by the SaaS vendor. The customer accesses this software over the Internet. Instead of having users install software on their computers or servers, the SaaS vendor owns the software and runs it on computers in
its data center. Microsoft Office 365 and many Amazon Web Services (AWS) offerings are perfect examples of SaaS.

So depending on your business requirements and budget, cloud service providers market a very broad offering of cloud computing products from highly specialized offerings to a large selection of services.

What’s nice here is that you’re offered a fixed price for each service that you use, which allows you to budget wisely for the future. It’s true that at first you’ll have to spend a little cash on staff training, but with automation you can do more with less staff because administration will be easier and less complex. All of this frees up company resources to work on new business requirements and be more agile and innovative in the long run.


Right now in our current, traditional networks, our router and switch ports are the only devices that are not virtualized. So this is what we’re really trying to do here—virtualize our physical ports.

First, understand that our current routers and switches run an operating system, such as Cisco IOS, that provides network functionality. This has worked well for us for 25 years or so, but it is way too cumbersome now to configure, implement, and troubleshoot these autonomous devices in today’s large, complicated networks. Before you even get started, you have to understand the business requirements and then push that out to all the devices. This can take weeks or even months since each device is configured, maintained, and monitored separately. 

Data plane This plane, also referred to as the forwarding plane, is physically responsible for forwarding frames and packets from its ingress to egress interfaces using protocols managed in the control plane. Here, data is received, the destination interface is looked up, and the forwarding of frames and packets happens, so the data plane relies completely on the control plane to provide solid information.

Control plane This plane is responsible for managing and controlling any forwarding table that the data plane uses. For example, routing protocols such as OSPF, EIGRP, RIP, and BGP as well as IPv4 ARP, IPv6 NDP, switch MAC address learning, and STP are all managed by the
control plane.
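The split between the two planes can be sketched in a few lines of Python: the control plane’s job is to build the forwarding table, and the data plane’s job is simply to find the longest matching prefix for each packet. This is only an illustrative model (the routes and interface names are made up), not how IOS implements it internally.

```python
import ipaddress

# Control plane: routing protocols (OSPF, EIGRP, BGP...) would populate this
# forwarding table; here it is hard-coded with hypothetical routes.
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"): "GigabitEthernet0/0",
    ipaddress.ip_network("10.1.0.0/16"): "GigabitEthernet0/1",
    ipaddress.ip_network("0.0.0.0/0"): "Serial0/0",  # default route
}

def forward(dst_ip):
    """Data plane: longest-prefix match against the control plane's table."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in forwarding_table if dst in net]
    return forwarding_table[max(matches, key=lambda net: net.prefixlen)]
```

A packet to 10.1.2.3 exits GigabitEthernet0/1 because the /16 entry is more specific than the /8—exactly the table the control plane exists to keep accurate for the data plane.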

Now that you understand that there are two planes used to forward traffic in our current or legacy network, let’s take a look at the future of networking.


If you have worked on any enterprise Wi-Fi installations in the last decade, you would have designed your physical network and then configured a type of network controller that managed all the wireless APs in the network. It’s hard to imagine that anyone would install a wireless network today without some type of controller in an enterprise network, where the access points (APs) receive their directions from the controller on how to manage the wireless frames and the APs have no operating system or brains to make many decisions on their own.

The same is now true for our physical router and switch ports, and it’s precisely this centralized management of network frames and packets that Software Defined Networking (SDN) provides to us.

SDN removes the control plane intelligence from the network devices by having a central controller manage the network instead of having a full operating system (Cisco IOS, for example) on the devices. In turn, the controller manages the network by separating the control and data (forwarding) planes, which automates configuration and the remediation of all devices. So instead of the network devices each having individual control planes, we now have a centralized control plane, which consolidates all network operations in the SDN controller. APIs allow for applications to control and configure the network without human intervention. The APIs are another type of configuration interface just like the CLI, SNMP, or GUI interfaces, which facilitate machine-to-machine operations.

The SDN architecture differs slightly from the architecture of traditional networks by adding a third layer, the application plane:

Data (or forwarding) plane Contains network elements, meaning any physical or virtual device that deals with data traffic. 

Control plane Usually a software solution, the SDN controllers reside here to provide centralized control of the router and switches that populate the data plane, removing the control plane from individual devices.

Application plane This new layer contains the applications that communicate their network requirements toward the controller using APIs.

SDN is pretty cool because your applications tell the network what to do based on business needs instead of you having to do it. Then the controller uses the APIs to pass instructions on to your routers, switches, or other network gear. So instead of taking weeks or months to push out a business requirement, the solution now only takes minutes. 
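That intent-driven flow can be mocked up in a few lines of Python. Everything here is hypothetical (the device names and the policy string are invented); the point is only the shape of the interaction: one northbound request in, many southbound pushes out.

```python
class Device:
    """A data plane element: it just applies whatever the controller sends."""
    def __init__(self, name):
        self.name = name
        self.config = {}

    def apply(self, key, value):
        # Stand-in for a southbound API call (OpenFlow, NETCONF, etc.)
        self.config[key] = value

class Controller:
    """The centralized control plane."""
    def __init__(self, devices):
        self.devices = devices

    def request(self, intent):
        # Northbound API: an application states WHAT it needs; the
        # controller decides HOW to program every device to deliver it.
        for dev in self.devices:
            dev.apply("policy", intent)

network = [Device("R1"), Device("SW1"), Device("SW2")]
Controller(network).request("prioritize-voice")  # one call, whole network
```

Compare that single `request()` call with logging in to each device’s CLI one at a time; that is the weeks-to-minutes difference the text describes.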

There are two sets of APIs that SDN uses, and they are very different. As you already know, the SDN controller uses APIs to communicate with both the application and data planes. Communication with the data plane is defined with southbound interfaces, while services are offered to the application plane using the northbound interface. Let’s take a deeper look at this oh-so-vital CCNA objective.


Logical southbound interface (SBI) APIs (or device-to-control-plane interfaces) are used for communication between the controllers and network devices. They allow the two devices to communicate so that the controller can program the data plane forwarding tables of your routers and switches.

Since all the network drawings had the network gear below the controller, the APIs that talked to the devices became known as southbound, meaning, “out the southbound interface of the controller.” And don’t forget that with SDN, the term interface is no longer referring to a physical interface! Unlike northbound APIs, southbound APIs have many standards, and you absolutely must know them well for the objectives. Let’s talk about them now:

OpenFlow Describes an industry-standard API defined by the ONF (opennetworking.org). It configures white label switches, meaning that they are non-proprietary, and as a result defines the flow path through the network. All the configuration is done through NETCONF.

NETCONF Although not all devices support NETCONF yet, what it provides is a network management protocol standardized by the IETF. Using RPC, you can install, manipulate, and delete the configuration of network devices using XML.

NOTE: NETCONF is a protocol that allows you to modify the configuration of a networking device, but if you want to modify the device’s forwarding table, then the OpenFlow protocol is the way to go.
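To make “RPC … using XML” concrete, here is a short Python sketch that builds the XML body of a NETCONF get-config request using only the standard library. The namespace comes from RFC 6241; in practice you would send this over SSH, typically with a library such as ncclient, rather than constructing it by hand.

```python
import xml.etree.ElementTree as ET

NS = "urn:ietf:params:xml:ns:netconf:base:1.0"  # NETCONF base namespace (RFC 6241)

def get_config_rpc(message_id, source="running"):
    """Build a NETCONF <get-config> RPC asking for a datastore's config."""
    rpc = ET.Element(f"{{{NS}}}rpc", {"message-id": str(message_id)})
    get_config = ET.SubElement(rpc, f"{{{NS}}}get-config")
    src = ET.SubElement(get_config, f"{{{NS}}}source")
    ET.SubElement(src, f"{{{NS}}}{source}")  # e.g. an empty <running/> element
    return ET.tostring(rpc, encoding="unicode")

print(get_config_rpc(101))
```

The reply from the device is likewise XML, which is what lets a controller parse and manipulate device configuration programmatically.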

onePK A Cisco proprietary SBI that allows you to inspect or modify the network element configuration without hardware upgrades. This makes life easier for developers by providing software development kits for Java, C, and Python.

OpFlex The name of the southbound API in the Cisco ACI world is OpFlex, an open-standard, distributed control system. Understand that OpenFlow first sends detailed and complex instructions to the control plane of the network elements in order to implement a new application policy—something called an imperative SDN model. On the other hand, OpFlex uses a declarative SDN model because the controller, which Cisco calls the APIC, sends a more abstract “summary policy” to the network elements. The summary policy assumes that the network elements will implement the required changes using their own control planes, since the devices use a partially centralized control plane.


To communicate from the SDN controller to the applications running over the network, you’ll use northbound interfaces (NBIs). By setting up a framework that allows an application to demand the network setup with the configuration it needs, the NBIs allow your applications to manage and control the network. This is priceless for saving time because you no longer need to adjust and tweak your network to get a service or application running correctly.

The NBI applications include a wide variety of automated network services, from network virtualization and dynamic virtual network provisioning to more granular firewall monitoring, user identity management, and access policy control. This allows for cloud orchestration applications that tie together server provisioning, storage, and networking to enable a complete rollout of new cloud services in minutes instead of weeks!

Sadly, at this writing there is no single northbound interface that you can use for communication between the controller and all applications. So instead, you use various and sundry northbound APIs, with each one working only with a specific set of applications. Most of the time, applications used by NBIs will be on the same system as the APIC controller, so the APIs don’t need to send messages over the network since both programs run on the same system. However, if they don’t reside on the same system, REST (Representational State Transfer) comes into play; it uses HTTP messages to transfer data over the API for applications that sit on different hosts.


Cisco Application Policy Infrastructure Controller Enterprise Module (APIC-EM) is a Cisco SDN controller, which uses the previously mentioned open APIs for policy-based management and security through a single controller, abstracting the network and making network services simpler. APIC-EM provides centralized automation of policy-based application profiles, and the APIC-EM northbound interface is the only API that you’ll need to control your network programmatically. Through this programmability, automated network control helps IT to respond quickly to new business opportunities. The APIC-EM also includes support for greenfield (new installations) and brownfield (current or old installations) deployments, which allows you to implement programmability and automation with the infrastructure that you already have.

APIC-EM is pretty cool, easy to use (that’s up for debate), and automates the tasks that network engineers have been doing for well over 20 years. At first glance this seems like it will replace our current jobs, and in some circumstances, people resistant to change will certainly be replaced. But you don’t have to be one of them if you start planning now.

Cisco APIC-EM’s northbound interface is only an API, but southbound interfaces are implemented with something called a service abstraction layer (SAL), which talks to the network elements via SNMP and the CLI. Using SNMP and the CLI allows APIC-EM to work with legacy Cisco products, and soon APIC-EM will be able to use NETCONF too.

The network devices can be either physical or virtual, including Nexus data center switches, ASA firewalls, ASR routers, or even third-party load balancers. The managed devices must be specific to ACI; in other words, a special NX-OS or ASR IOS version is required to add the southbound APIs required to communicate with the APIC controller.

The APIC-EM API is REST based, which as you know allows you to discover and control your network using HTTP by using the GET, POST, PUT, and DELETE options along with JSON (JavaScript Object Notation) and XML (eXtensible Markup Language) syntax.
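As a sketch of what “REST based” means in practice, the snippet below builds (without sending) the authentication POST that APIC-EM expects before any other call. The /api/v1/ticket path and X-Auth-Token header follow APIC-EM’s published REST API, but treat the hostname and credentials as placeholders and verify the paths against your controller’s documentation.

```python
import json
import urllib.request

def build_ticket_request(controller, username, password):
    """Build the POST that asks APIC-EM for a service ticket.
    The serviceTicket in the JSON reply is then carried in an
    X-Auth-Token header on subsequent GET/POST/PUT/DELETE calls."""
    body = json.dumps({"username": username, "password": password}).encode()
    return urllib.request.Request(
        url=f"https://{controller}/api/v1/ticket",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical controller address and credentials; calling
# urllib.request.urlopen(req) is what would actually send it.
req = build_ticket_request("apic-em.example.com", "admin", "secret")
```

Notice that everything rides on plain HTTP verbs and JSON, which is exactly why any scripting language can drive the controller.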

Here are some important features of Cisco APIC-EM that are covered in the CCNA objectives.

On the left of the screen you can see the Discover button. This provides the network information database, which scans the network and provides the inventory, including all network devices; the network devices are also shown in the Device Inventory.

There’s also a network topology visualization, which reveals the physical topology discovered. This autodiscovers and maps network devices to a physical topology with detailed device-level data, including the discovered hosts. And take note of that IWAN button: it provides the provisioning of IWAN network profiles with simple business policies. Plus there is a Path Trace button, which I will talk about next.

But wait… Before I move on to the Path Trace functionality of APIC-EM, let me go over a few more promising features with you. In the APIC-EM is the Zero-Touch Deployment feature, which finds a new device via the controller scanner and automatically configures it. You can track user identities and endpoints by exchanging the information with the Cisco Identity Service Engine (Cisco ISE) via the Identity Manager. You can also quickly set and enforce QoS priority policies with the QoS deployment and change management feature and accelerate ACL management by querying and analyzing ACLs on each network device. This means you can quickly identify ACL misconfiguration using the ACL analysis! And last but not least, using the Policy Manager, the controller translates a business policy into a network-device-level policy. The Policy Manager can enforce the policy for a specific user at various times of the day, across wired and wireless networks.

Now let’s take a look at the very vital path tracing feature of APIC-EM.


An important objective in this intelligent networks chapter is the path tracing ability in the APIC-EM. Matter of fact, I’ve mostly been using the APIC-EM for just this feature in order to help me troubleshoot my virtual networks, and it does it very well. And it looks cool too!

Pushing toward staying tight to the CCNA objectives here, you really want to know that you can use the path trace service of APIC-EM for ACL analysis. Doing this allows you to examine the path that a specific type of packet travels as it makes its way across the network from a source to a destination node, and it can use IP and TCP/UDP ports when diagnosing an application issue. If there is an ACL on an interface that blocks a certain application, you’ll see this in a GUI output. I cannot tell you how
extremely helpful this is in the day-to-day administration of your networks!

The result of a path trace will be a GUI representation of the path that a packet takes across all the devices and links between the source and destination, and you can choose to provide the output of a reverse path as well. Although I could easily do this same work manually, it would certainly be a whole lot more time consuming if I did! APIC-EM’s Path Trace app actually does the work for you with just a few clicks at the user
interface.

Once you fill in the fields for the source, destination, and optionally, the application, the path trace is initiated. You’ll then see the path between hosts, plus the list of each device along the path.

Okay, here you can see the complete path taken from host A to host B. I chose to view the reverse path as well. In this particular case, we weren’t being blocked by an ACL, but if a packet actually was being blocked for a certain application, we’d see the exact interface where the application was
blocked and why. Here is more detail on how my trace occurred.

First, the APIC-EM Discovery finds the network topology. At this point I can choose the source and destination address of a packet and, optionally, the port numbers and application to be used. The MAC address tables, IP routing tables, and so on are used by the APIC-EM to find the path of the packet through the network. Finally, the GUI will show you the path, complete with device and interface information.
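Under the hood, those steps amount to a graph search over the discovered topology. The toy Python sketch below (hypothetical device names; a breadth-first search standing in for APIC-EM’s MAC and routing table lookups) mirrors them: a discovered topology in, a source and destination chosen, and the device-by-device path out.

```python
from collections import deque

# Step 1: "discovery" -- a toy topology map with hypothetical device names.
links = {
    "HostA": ["SW1"],
    "SW1": ["HostA", "R1"],
    "R1": ["SW1", "SW2"],
    "SW2": ["R1", "HostB"],
    "HostB": ["SW2"],
}

def path_trace(src, dst):
    """Step 2: find the device-by-device path (BFS stands in for the
    table lookups APIC-EM performs on each hop)."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path  # Step 3: this is what the GUI would draw
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Running the same function with the arguments swapped gives the reverse path, the optional second view the app offers.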

Last point: The APIC-EM is free, and most of the applications off the NBI are built in and included, but there are some solution applications that need a license. So if you have a VM with at least 64 gigs of RAM, you’re set!


This topic was covered in the chapter “Wide Area Networks,” but it’s important to at least touch on it here since it’s included in the “Evolution of Intelligent Networks” CCNA objectives. We’ll also take a peek at the APIC-EM for IWAN in this section. What Cisco’s IWAN solution provides is a way to take advantage of cheaper bandwidth at remote locations, without compromising application performance, availability, or even security and in an easy-to-configure manner—nice!
Clearly, this allows us to use low-cost Internet connections, which have become more reliable and cost effective compared to the dedicated WAN links we used to use, and it means we can now take advantage of this low-cost technology with Cisco’s new Intelligent WAN (Cisco IWAN). Add in the failover and redundancy features that IWAN provides and you’ll definitely see why large enterprises are deploying it. The downside? Well, nothing to you and me, because we always use Cisco’s gear from end to end in all our networks, right? IWAN can provide great long-distance connections for your organization by dynamically routing based on the application service-level agreement (SLA) while paying attention to network conditions. And it can do this over any type of network!

There are four components of Cisco’s IWAN:

Transport-independent connectivity Cisco IWAN provides a type of advanced VPN connection across all available routes to remote locations, providing one network with a single routing domain. This allows you to easily multi-home the network across different types of connections, including MPLS, broadband, and cellular.

Intelligent path control By using Cisco Performance Routing (Cisco PfR), Cisco IWAN improves application delivery and WAN efficiency.

Application optimization Via Cisco’s Application Visibility and Control (Cisco AVC) as well as Cisco’s Wide Area Application Services (Cisco WAAS), you can now optimize application performance over WAN links.

Highly secure connectivity Using VPNs, firewalls, network segmentation, and security features, Cisco IWAN helps ensure that these solutions actually provide the security you need over the public Internet.

Quality of Service

Quality of service (QoS) refers to the way resources are controlled so that the quality of services is maintained. It’s basically the ability to provide a different priority to one or more types of traffic over other levels for different applications, data flows, or users so that they can be guaranteed a certain performance level. QoS is used to manage contention for network resources for a better end-user experience.


Delay Data can run into congested lines or take a less-than-ideal route to the destination, and delays like these can make some applications, such as VoIP, fail. This is the best reason to implement QoS when real-time applications are in use in the network—to prioritize delay-sensitive traffic.

Dropped Packets Some routers will drop packets if they receive them while their buffers are full. If the receiving application is waiting for the packets but doesn’t get them, it will usually request that the packets be retransmitted—another common cause of service delays. With QoS, when there is contention on a link, less important traffic is delayed or dropped in favor of delay-sensitive, business-important traffic.

Error Packets can be corrupted in transit and arrive at the destination in an unacceptable format, again requiring retransmission and resulting in delays for traffic such as video and voice.

Jitter Not every packet takes the same route to the destination, so some will be more delayed than others if they travel through a slower or busier network connection. The variation in packet delay is called jitter, and this can have a nastily negative impact on programs that communicate in real time.

Out-of-Order Delivery Out-of-order delivery is also a result of packets taking different paths through the network to their destinations. The application at the receiving end needs to put them back together in the right order for the message to be completed. So if there are significant delays, or the packets are reassembled out of order, users will probably notice degradation of an application’s quality. QoS can ensure that applications with a required level of predictability will receive the necessary bandwidth to work properly. Clearly, on networks with excess bandwidth this is not a factor, but the more limited your bandwidth is, the more important a concept like this becomes!

Traffic Characteristics

In today’s networks, you will find a mix of data, voice, and video traffic. Each traffic type has different properties.
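To pin down what “variation in packet delay” means, here is a minimal Python sketch that estimates jitter as the average change between consecutive one-way delays. (RFC 3550, the RTP specification, uses a smoothed running version of this same idea.)

```python
def jitter(delays_ms):
    """Average variation between consecutive packet delays, in ms."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# Four packets arriving with uneven one-way delays: the consecutive
# variations are 10, 5, and 10 ms, averaging about 8.33 ms of jitter.
print(jitter([100, 110, 105, 115]))
```

A perfectly steady stream (every delay identical) yields zero jitter, which is exactly what real-time voice and video want.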

Data traffic is not real-time traffic; it comprises bursty (unpredictable) traffic with widely varying packet arrival times.

  • Smooth/bursty
  • Benign/greedy
  • Drop insensitive
  • Delay insensitive
  • TCP re-transmits
Data traffic doesn’t really require special handling in today’s network, especially if TCP is used. Voice traffic is real-time traffic with constant, predictable bandwidth and known packet arrival times.

The following are voice characteristics on a network:
  • Smooth traffic 
  • Benign
  • Drop sensitive
  • Delay sensitive
  • UDP priority
One-way voice traffic needs the following:
  • Latency of less than or equal to 150 milliseconds
  • Jitter of less than or equal to 30 milliseconds
  • Loss of less than or equal to 1%
  • Bandwidth of only 30–128 Kbps
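Those three one-way limits are easy to turn into a quick compliance check; a hypothetical monitoring script might flag a link like this:

```python
# One-way voice requirements from the list above.
VOICE_SLA = {"latency_ms": 150, "jitter_ms": 30, "loss_pct": 1.0}

def meets_voice_sla(latency_ms, jitter_ms, loss_pct):
    """True only if all three measured values are within the voice limits."""
    return (latency_ms <= VOICE_SLA["latency_ms"]
            and jitter_ms <= VOICE_SLA["jitter_ms"]
            and loss_pct <= VOICE_SLA["loss_pct"])

print(meets_voice_sla(120, 20, 0.5))  # healthy link: True
print(meets_voice_sla(180, 20, 0.5))  # latency blown: False, voice will suffer
```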
There are several types of video traffic, and a lot of the traffic on the Internet today is video, with Netflix, Hulu, etc. Video traffic can include streaming video, real-time interactive video, and video conferencing. One-way video traffic needs the following:

  • Latency of less than or equal to 200–400 milliseconds
  • Jitter of less than or equal to 30–50 milliseconds
  • Loss of less than or equal to 0.1%–1%
  • Bandwidth of 384 Kbps to 20 Mbps or greater
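To make these one-way targets concrete, here's a minimal sketch in Python (the function and dictionary names are my own, and I've taken the upper bound where the lists above give a range) that checks whether a measured flow meets the delay, jitter, and loss targets:

```python
# One-way requirements from the lists above (upper bound used where a range is given).
REQUIREMENTS = {
    "voice": {"latency_ms": 150, "jitter_ms": 30, "loss_pct": 1.0},
    "video": {"latency_ms": 400, "jitter_ms": 50, "loss_pct": 1.0},
}

def meets_requirements(kind, latency_ms, jitter_ms, loss_pct):
    """Return True if a measured one-way flow satisfies the targets for its type."""
    req = REQUIREMENTS[kind]
    return (latency_ms <= req["latency_ms"]
            and jitter_ms <= req["jitter_ms"]
            and loss_pct <= req["loss_pct"])

# A voice call at 120 ms latency, 20 ms jitter, 0.5% loss is acceptable:
print(meets_requirements("voice", 120, 20, 0.5))   # True
# The same call at 200 ms latency misses the voice latency target:
print(meets_requirements("voice", 200, 20, 0.5))   # False
```

Bandwidth is left out of the check because it's a provisioning range rather than a per-flow ceiling.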

The trust boundary is a point in the network where packet markings (which identify traffic such as voice, video, or data) are not necessarily trusted. You can create, remove, or rewrite markings at that point. The borders of a trust domain are the network locations where packet markings are accepted and acted upon.

IP phones and router interfaces are typically trusted, but devices beyond those points are not. Here are some things you need to remember for the exam objectives:
Untrusted domain This is the part of the network that you are not managing, such as PCs, printers, etc.

Trusted domain This is the part of the network with only administrator-managed devices, such as switches, routers, etc.

Trust boundary This is where packets are classified and marked. For example, the trust boundary would be IP phones and the boundary between the ISP and enterprise network. In an enterprise campus network, the trust boundary is almost always at the edge switch.

Traffic at the trust boundary is classified and marked before being forwarded to the trusted domain. Markings on traffic coming from an untrusted domain are usually ignored to prevent end-user-controlled markings from taking unfair advantage of the network QoS configuration.


In this section we’ll be covering these important mechanisms: 
  • Classification and marking tools
  • Policing, shaping, and re-marking tools 
  • Congestion management (or scheduling) tools 
  • Link-specific tools
So let’s take a deeper look at each mechanism now.


A classifier is an IOS tool that inspects fields within a packet to identify the type of traffic the packet is carrying. This is so that QoS can determine which traffic class it belongs to and how it should be treated. It's important that this isn't a constant cycle for traffic, because classification takes up time and resources. Traffic is then directed to a policy-enforcement mechanism, referred to as policing, for its specific type.

Policy enforcement mechanisms include marking, queuing, policing, and shaping, and there are various layer 2 and layer 3 fields in a frame and packet for marking traffic. You are definitely going to have to understand these marking techniques to meet the objectives, so here we go:

Class of Service (CoS) A layer 2 Ethernet frame marking that contains 3 bits. It is called the Priority Code Point (PCP) within the Ethernet frame header when VLAN-tagged frames, as defined by IEEE 802.1Q, are used.

Type of Service (ToS) ToS comprises 8 bits, 3 of which are designated as the IP precedence field in an IPv4 packet header. The equivalent IPv6 header field is called Traffic Class.

Differentiated Services Code Point (DSCP or DiffServ) One of the methods we can use for classifying and managing network traffic and providing quality of service (QoS) on modern IP networks is DSCP. This technology uses a 6-bit differentiated services code point in the 8-bit Differentiated Services field (DS field) in the IP header for packet classification. DSCP allows for the creation of traffic classes that can be used to assign priorities. While IP precedence is the old way to mark ToS, DSCP is the new way. DSCP is backward compatible with IP precedence.

Layer 3 packet marking with IP precedence and DSCP is the most widely deployed marking option because layer 3 packet markings have end-to-end significance.

Class Selector Class Selector uses the same 3 bits of the field as IP precedence and is used to indicate a 3-bit subset of DSCP values.

Traffic Identifier (TID) TID, used in wireless frames, describes a 3-bit field in the QoS control field of 802.11 frames. It's very similar to CoS, so just remember: CoS is wired Ethernet and TID is wireless.
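Since these markings are all just bit fields, the relationships among them can be sketched in a few lines of Python (the helper names are my own; the bit positions follow the definitions above):

```python
def dscp_from_tos(tos_byte):
    """The 6-bit DSCP value occupies the upper 6 bits of the 8-bit ToS/DS byte."""
    return (tos_byte >> 2) & 0x3F

def ip_precedence_from_tos(tos_byte):
    """IP precedence is the upper 3 bits of the same byte, which is why
    DSCP is backward compatible with it."""
    return (tos_byte >> 5) & 0x07

def cos_from_8021q_tci(tci):
    """The 3-bit CoS/PCP value sits in the top 3 bits of the 16-bit 802.1Q TCI."""
    return (tci >> 13) & 0x07

# DSCP EF (Expedited Forwarding, commonly used for voice) is 46,
# carried on the wire as ToS byte 0xB8:
assert dscp_from_tos(0xB8) == 46
# Its upper 3 bits give IP precedence 5, showing the backward compatibility:
assert ip_precedence_from_tos(0xB8) == 5
```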

Classification Marking Tools As discussed in the previous section, classification of traffic determines which type of traffic the packets or frames belong to, which then allows you to apply policies to it by marking, shaping, and policing. Always try to mark traffic as close to the trust boundary as possible. To classify traffic, we generally use one of three techniques:

Markings This technique looks at header information for existing layer 2 or layer 3 settings; classification is based on existing markings.

Addressing This classification technique looks at header information using source and destination interfaces, layer 2 and layer 3 addresses, and layer 4 port numbers. You can group traffic by device using IP addresses and by type using port numbers.

Application signatures This technique looks at the information in the payload and is called deep packet inspection.

Let's dive deeper into deep packet inspection by discussing something called Network Based Application Recognition (NBAR). NBAR is a classifier that provides deep-packet inspection of layers 4 through 7 of a packet; however, know that NBAR is the most CPU-intensive technique compared to using addresses (IP or ports) or access control lists (ACLs). Since it's not always possible to identify applications by looking at just layers 3 and 4, NBAR looks deep into the packet payload and compares the payload content against its signature database, called a Packet Description Language Module (PDLM). There are two different modes of operation used with NBAR:

Passive mode Using passive mode will give you real-time statistics on applications by protocol or interface as well as the bit rate, packet, and byte counts.

Active mode Classifies applications for traffic marking so that QoS policies can be applied.

Policing, Shaping, and Re-Marking Okay, now that we've identified and marked traffic, it's time to put some action on our packets. We do this with bandwidth assignments, policing, shaping, queuing, or dropping. For example, if some traffic exceeds its allocated bandwidth, it might be delayed, dropped, or even re-marked in order to avoid congestion. Policers and shapers are two tools that identify and respond to traffic problems, and both are rate limiters.


Policers Since policers make instant decisions, you want to deploy them on the ingress if possible. This is because you want to drop traffic as soon as you receive it if it's going to be dropped anyway. Even so, you can still place them on an egress to control the amount of traffic per class. When traffic exceeds the configured rate, policers don't delay it, which means they don't introduce jitter or delay; they just check the traffic and can drop it or re-mark it. Just know that this means there's a higher drop probability, and it can cause a significant number of TCP resends.

Shapers Shapers are usually deployed on the egress side, between an enterprise network and the service provider network, to make sure you stay within the carrier's contract rate. If the traffic does exceed the rate, it will get policed by the provider and dropped. Shaping allows the traffic to meet the SLA and means there will be fewer TCP resends than with policers. Be aware, though, that shaping does introduce jitter and delay.

Just remember: policers drop traffic and shapers delay it. Policers cause significant TCP resends and shapers do not. Shapers introduce delay and jitter, but policers do not.
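Both policers and shapers are typically built on a token bucket. Here's a minimal single-rate sketch in Python (the class and parameter names are my own, not an IOS interface) showing the key behavioral difference: the policer drops out-of-contract packets immediately, while the shaper queues them for later, trading drops for delay:

```python
import collections

class TokenBucket:
    """Single-rate token bucket: tokens accrue at `rate` bytes/sec up to `burst`."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def conforms(self, size, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True          # in-contract traffic
        return False             # out-of-contract traffic

def police(bucket, size, now):
    """Policer: out-of-contract packets are dropped (or could be re-marked)."""
    return "send" if bucket.conforms(size, now) else "drop"

def shape(bucket, queue, size, now):
    """Shaper: out-of-contract packets are delayed in a queue, not dropped."""
    if bucket.conforms(size, now):
        return "send"
    queue.append(size)           # this delay is what introduces jitter
    return "queued"

bucket = TokenBucket(rate=1000, burst=1500)   # 1000 bytes/sec, 1500-byte burst
q = collections.deque()
print(police(bucket, 1500, now=0.0))   # send  (the burst allowance covers it)
print(police(bucket, 1500, now=0.1))   # drop  (only ~100 tokens have refilled)
```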


This section and the next one, on congestion avoidance, cover congestion issues. Whenever traffic exceeds available network resources, the traffic gets queued, which is basically the temporary storage of backed-up packets. You perform queuing in order to avoid dropping packets. This isn't a bad thing; it's actually a good thing, because otherwise all traffic that couldn't be processed immediately would simply be dropped. However, traffic classes like VoIP would actually be better off being immediately dropped unless you can somehow guarantee delay-free bandwidth for that traffic. When congestion occurs, the congestion-management tools are activated.


Queuing (or buffering) Buffering is the logic of ordering packets in output buffers. It is activated only when congestion occurs. When queues fill up, packets can be reordered so that higher-priority packets are sent out of the exit interface sooner than lower-priority ones.

Scheduling This is the process of deciding which packet should be sent out next, and it occurs whether or not there is congestion on the link.

Staying with scheduling for another minute, know that there are some scheduling mechanisms you really need to be familiar with. We'll go over those, and then I'll head back over for a detailed look at queuing:

Strict priority scheduling Low-priority queues are only serviced once the high-priority queues are empty. This is great if you are the one sending high-priority traffic, but it’s possible that low-priority queues will never be processed. We call this traffic or queue starvation.
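To see why strict priority scheduling can starve low-priority traffic, here's a toy Python simulation (the names are my own): as long as high-priority packets keep arriving, the low-priority queue is never serviced:

```python
from collections import deque

def strict_priority_step(high_q, low_q):
    """Serve one packet: low-priority is serviced only if high-priority is empty."""
    if high_q:
        return high_q.popleft()
    if low_q:
        return low_q.popleft()
    return None

high = deque(["h1", "h2"])
low = deque(["l1"])
served = []
for arrival in ["h3", "h4", "h5"]:       # high-priority traffic keeps arriving...
    served.append(strict_priority_step(high, low))
    high.append(arrival)

# ...so the low-priority packet is still waiting: this is queue starvation.
print(served)       # ['h1', 'h2', 'h3']
print(list(low))    # ['l1']
```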

Round-robin scheduling This is a rather fair technique because queues are serviced in a set sequence. You won't have starving queues here, but real-time traffic suffers greatly.

Weighted fair scheduling By weighting the queues, the scheduling process will service some queues more often than others, which is an upgrade over round-robin. You won't have any starvation here either, and unlike with round-robin, you can give priority to real-time traffic. It does not, however, provide bandwidth guarantees.

Okay, let's run back over and finish queuing. Queuing is typically a layer 3 process, but some queuing can occur at layer 2 or even layer 1. Interestingly, if a layer 2 queue fills up, the data can be pushed into the layer 3 queues, and at layer 1 (called the transmit ring, or TX-ring, queue), when that fills up, the data will be pushed into the layer 2 and 3 queues. This is when QoS becomes active on the device.

There are many different queuing mechanisms, with only two typically used today, but let's take a look at the legacy queuing methods first:

First in, first out (FIFO) A single queue with packets being processed in the exact order in which they arrived.

Priority queuing (PQ) This is not really a good queuing method because lower-priority queues are served only when the higher-priority queues are empty. There are only four queues, and low-priority traffic may never be sent.

Custom queuing (CQ) With up to 16 queues and round-robin scheduling, CQ prevents low-level queue starvation and provides traffic guarantees. But it doesn't provide strict priority for real-time traffic, so your VoIP traffic could end up being dropped.

Weighted fair queuing (WFQ) This was actually a pretty popular way of queuing for a long time because it divided up the bandwidth by the number of flows, which provided bandwidth for all applications. This was great for real-time traffic, but it doesn't offer any guarantees for a particular flow.

Now that you know about all the not-so-good queuing methods, let's take a look at the two newer queuing mechanisms that are recommended for today's rich-media networks.

The two new and improved queuing mechanisms you should now use in today’s network are class-based weighted fair queuing and low latency queuing:

Class-based weighted fair queuing (CBWFQ) Provides fairness and bandwidth guarantees for all traffic, but it does not provide latency guarantees and is typically only used for data traffic management.

Low latency queuing (LLQ) LLQ is really the same thing as CBWFQ but with stricter priority for real-time traffic. LLQ is great for both data and real-time traffic because it provides both latency and bandwidth guarantees. With LLQ, real-time traffic goes into a strict-priority, low-latency queue at the top; if you remove that low-latency queue, you're left with CBWFQ, which is used only for data-traffic networks.
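The LLQ idea can be sketched in a few lines of Python (the names are my own, and the weighted round-robin stands in for CBWFQ's fair scheduling): the strict-priority queue is always drained first, and the remaining classes share the link by weight, so removing the priority queue leaves you with plain class-based weighted fair behavior:

```python
from collections import deque

def llq_dequeue(priority_q, class_queues, weights):
    """LLQ sketch: drain the strict-priority (low-latency) queue first,
    then service the data classes in weighted round-robin order."""
    sent = list(priority_q)            # real-time traffic always goes first
    priority_q.clear()
    while any(class_queues.values()):
        for name, weight in weights.items():
            for _ in range(weight):
                if class_queues[name]:
                    sent.append(class_queues[name].popleft())
    return sent

voice = deque(["rtp1", "rtp2"])
classes = {"web": deque(["w1", "w2"]), "bulk": deque(["b1"])}
print(llq_dequeue(voice, classes, {"web": 2, "bulk": 1}))
# ['rtp1', 'rtp2', 'w1', 'w2', 'b1']
```

In a real implementation the priority queue is also policed so it can't starve the data classes; that check is omitted here for brevity.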


TCP changed our networking world when it introduced sliding windows as a flow-control mechanism in the mid-1990s. Flow control is a way for the receiving device to control the amount of traffic from a transmitting device.

Before sliding windows, if a problem occurred during a data transmission, the flow control methods used by TCP and other layer 4 protocols like SPX would cut the transmission rate in half and leave it there, at the same rate or lower, for the duration of the connection. This was certainly a point of contention with users!

TCP actually does cut transmission rates drastically when a flow control issue occurs, but it increases the transmission rate again once the missing segments are resolved or the packets are finally processed. Because of this behavior, and although it was awesome at the time, this method can result in what we call tail drop. Tail drop is definitely suboptimal for today's networks because it doesn't use the bandwidth effectively. To clarify, tail drop refers to dropping packets as they arrive when the queues on the receiving interface are full. This wastes precious bandwidth, since TCP will just keep resending the data until it's happy again (meaning an ACK has been received). This brings up another new term, TCP global synchronization, where senders all reduce their transmission rates at the same time when packet loss occurs.

Congestion avoidance starts dropping packets before a queue fills, and it chooses which packets to drop by using traffic weights instead of just randomness. Cisco uses something called weighted random early detection (WRED), a queuing method that ensures that high-precedence traffic has lower loss rates than other traffic during congestion. This allows more important traffic, like VoIP, to be prioritized over what you'd consider less important traffic, such as, for example, a connection to Facebook.
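Here's a minimal WRED-style drop-probability sketch in Python (the threshold values and names are illustrative assumptions, not IOS defaults): below a minimum threshold nothing is dropped, between the thresholds the drop probability ramps up linearly, and higher-precedence traffic gets higher thresholds so it's dropped less often:

```python
def wred_drop_probability(avg_queue, min_th, max_th, max_prob=0.1):
    """Linear ramp from 0 at min_th to max_prob at max_th; drop all above max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_prob * (avg_queue - min_th) / (max_th - min_th)

# Higher-precedence traffic gets a higher minimum threshold (illustrative values,
# in packets), so at the same average queue depth it sees fewer drops.
PROFILES = {
    "voice (precedence 5)": (30, 40),
    "best effort (prec 0)": (10, 40),
}
for name, (lo, hi) in PROFILES.items():
    print(name, round(wred_drop_probability(25, lo, hi), 3))
# voice (precedence 5) 0.0
# best effort (prec 0) 0.05
```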


If three traffic flows start at different times and congestion occurs, using TCP could first cause tail drop, which drops traffic as soon as it is received if the buffers are full. At that point TCP would restart the traffic flows, synchronizing the TCP flows in waves, which would then leave much of the bandwidth unused.


This all-new chapter was totally focused on the CCNA objectives for intelligent networks. I started the chapter by covering switch stacking using Stack-Wise and then moved on to discuss the important realm of cloud computing and what effect it has on the enterprise network. Although this chapter had a lot of material, I stuck really close to the objectives on the more difficult subjects to help you tune in specifically to the content that’s important for the CCNA, including the following: Software Defined Networking (SDN), application programming interfaces (APIs), Cisco’s Application Policy Infrastructure Controller Enterprise Module (APIC-EM), Intelligent WAN, and finally, quality of service (QoS).


Understand switch stacking and Stack-Wise. You can connect up to nine individual switches together to create a single Stack-Wise unit.

Understand basic cloud technology. Understand cloud services such as SaaS and others and how virtualization works.

Have a deep understanding of QoS. You must understand QoS in detail, specifically marking; device trust; prioritization for voice, video, and data; shaping; policing; and congestion management.

Understand APIC-EM and the path trace. Read through the APIC-EM section as well as the APIC-EM path trace section, which cover the CCNA objectives fully.

Understand SDN. Understand how a controller works, especially the separation of the control and data planes, as well as the northbound and southbound APIs.


1. Which QoS mechanism is a 6-bit value that is used to describe the
meaning of the layer 3 IPv4 ToS field?
2. Southbound SDN interfaces are used between which two planes?
3. Which QoS mechanism is a term that is used to describe a 3-bit field
in the QoS control field of wireless frames?
4. What are the three general ways to classify traffic?
5. CoS is a layer 2 QoS __________?
6. A session is using more bandwidth than allocated. Which QoS
mechanism will drop the traffic?
7. What are the three SDN layers?
8. What are two examples of newer queuing mechanisms that are
recommended for rich-media networks?
9. What is a layer 4 to 7 deep-packet inspection classifier that is more
CPU intensive than marking?
10. __________APIs are responsible for the communication between
the SDN controller and the services running over the network.


1. Which of the following is a congestion-avoidance mechanism?

A. LMI
B. WRED
C. QPM
D. QoS

2. Which of the following are true regarding Stack-Wise? (Choose two.)

A. A Stack-Wise interconnect cable is used to connect the switches to
     create a bidirectional, closed-loop path.
B. A Stack-Wise interconnect cable is used to connect the switches to
     create a unidirectional, closed-loop path.
C. Stack-Wise can connect up to nine individual switches joined in a
     single logical switching unit.
D. Stack-Wise can connect up to nine individual switches joined into
     multiple logical switching units and managed by one IP address.

3. Which of the following is the best definition of cloud computing?

A.  data center
B. Computing model with all your data at the service provider
C. On-demand computing model
D. Computing model with all your data in your local data center

4. Which three features are properties and one-way requirements for voice traffic? (Choose three.)

A. Bursty voice traffic.
B. Smooth voice traffic.
C. Latency should be below 400 ms.
D. Latency should be below 150 ms.
E. Bandwidth is roughly between 30 and 128 kbps.
F. Bandwidth is roughly between 0.5 and 20 Mbps.

5. On which SDN architecture layer does Cisco APIC-EM reside?

A. Data
B. Control
C. Presentation
D. Application

6. In which cloud service model is the customer responsible for
managing the operating system, software, platforms, and
applications?

A. IaaS
B. SaaS
C. PaaS
D. APIC-EM

7. Which statement about QoS trust boundaries or domains is true?

A. The trust boundary is always a router.
B. PCs, printers, and tablets are usually part of a trusted domain.
C. An IP phone is a common trust boundary.
D. Routing will not work unless the service provider and the
     enterprise network are one single trust domain.

8. Which statement about IWAN is correct?

A. The IWAN allows transport-independent connectivity.
B. The IWAN allows only static routing.
C. The IWAN does not provide application visibility because only
     encrypted traffic is transported.
D. The IWAN needs special encrypting devices to provide an
      acceptable security level.

9. Which advanced classification tool can be used to classify data
applications?

A. NBAR
B. MPLS
C. APIC-EM
D. ToS

10. The DSCP field constitutes how many bits in the IP header?

A. 3 bits
B. 4 bits
C. 6 bits
D. 8 bits

11. Between which two planes are SDN southbound interfaces used?

A. Control
B. Data
C. Routing
D. Application

12. Which option is a layer 2 QoS marking?

A. EXP
B. QoS group
C. DSCP
D. CoS

13. You are starting to use SDN in your network. What does this mean?

A. You no longer have to work anymore, but you’ll get paid more.
B. You’ll need to upgrade all your applications.
C. You’ll need to get rid of all Cisco switches.
D. You now have more time to react faster when you receive a new business requirement.

14. Which QoS mechanism will drop traffic if a session uses more than the allotted bandwidth?

A. Congestion management
B. Shaping
C. Policing
D. Marking

15. Which three layers are part of the SDN architecture? (Choose three.)

A. Network
B. Data Link
C. Control
D. Data
E. Transport
F. Application

16. Which of the following is NOT true about APIC-EM ACL analysis?

A. Fast comparison of ACLs between devices to visualize differences and identify misconfigurations
B.  Inspection, interrogation, and analysis of network access control policies
C. Ability to provide layer 4 to layer 7 deep-packet inspection
D. Ability to trace application-specific paths between end devices to
     quickly identify ACLs and other problem areas

17. Which two of the following are not part of APIC-EM?

A. Southbound APIs are used for communication between the controllers and network devices.
B. Northbound APIs are used for communication between the controllers and network devices.
C. OnePK is Cisco proprietary.
D. The control plane is responsible for the forwarding of frames or
     packets.

18. When stacking switches, which is true? (Choose two.)

A. The stack is managed as multiple objects and has a single management IP address.
B. The stack is managed as a single object and has a single management IP address.
C. The master switch is chosen when you configure the first switch’s master algorithm to on.
D. The master switch is elected from one of the stack member
      switches.

19. Which of the following services provides the operating system and the
network?

A. IaaS
B. PaaS
C. SaaS
D. None of the above

20. Which of the following services provides the required software, the
operating system, and the network?

A. IaaS
B. PaaS
C. SaaS
D. None of the above

21. Which of the following is NOT a benefit of cloud computing for a
cloud user?

A. On-demand, self-service resource provisioning
B. Centralized appearance of resources
C. Local backups
D. Highly available, horizontally scaled application architectures
