Network Infrastructure

Application Management in a Software-Defined Data Center

Rick Sturm, ... Julie Craig, in Application Performance Management (APM) in the Digital Enterprise, 2017

Software-Defined Networking

Network infrastructure must be abstracted for consumption by workloads in the SDDC. Software-defined networking (SDN) emerged as a critical enabler of networking within the SDDC. The Open Networking Foundation (ONF), a nonprofit consortium that maintains stewardship over the OpenFlow SDN protocol, defines SDN as the separation of the control and data planes in network devices, where the control plane is consolidated within a centralized controller that programs network flow rules into individual data plane devices.1 In this manner, SDN's ability to employ a single, logically isolated computing infrastructure within which discrete networks can easily be created allows organizations to move from production to development to test. Interestingly, decoupling the control plane and data plane was not the most important SDN characteristic identified by respondents to a 2016 Enterprise Management Associates End-User Research Report on the impacts of SDN and network virtualization on network management.2 Table 14.1 shows the percentage of respondents who identified a variety of defining SDN characteristics that are important to the solutions they implement.

Table 14.1. SDN Defining Characteristics Important to Solution Implementation

SDN Characteristic Percentage (%)
Centralized controller 35
Low-cost hardware 28
Fluid network architecture 25
Open source software 24
Software-only solutions with no hardware refresh 24
OpenFlow protocol 21
Decoupling the control plane and data plane 11

The OpenFlow protocol identified by respondents to the Enterprise Management Associates (EMA) survey was created by the Open Networking Foundation (ONF) to standardize critical elements of the SDN architecture and is the first standard interface designed specifically for SDN. The standard is designed to provide high-performance, granular traffic control across the network devices of multiple vendors. Table 14.2 shows the benefits that can be achieved by using the OpenFlow protocol.

Table 14.2. Benefits of SDN OpenFlow Protocol

Centralized management and control of networking devices from multiple vendors
Improved automation and management
Rapid innovation through new network capabilities and services without the need to configure individual devices or wait for vendor releases
Programmability by operators, enterprises, independent software vendors, and users
Increased network reliability and security
More granular network control with the ability to apply comprehensive and wide-ranging policies at session, user, device, and application levels
Better end-user experience

Adapted from Software-Defined Networking: The New Norm for Networks, April 13, 2012. Available from: https://www.opennetworking.org/images/stories/downloads/sdn-resource/white-papers/wp-sdn-newnorm.pdf.

OpenFlow began as a Stanford University research project in 2008. Vendors and large enterprises started productizing the technology and implementing SDN in 2011. Data center mega-user Google built its own SDN switches and was the first company to build a global software-driven network. Meanwhile, vendors including Microsoft, VMware, Cisco, and Brocade released OpenFlow-friendly products or other SDN technologies, such as software overlays or policy-based networking.
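The ONF definition above separates a centralized control plane from simple data-plane devices that only match packets against installed flow rules. A minimal Python sketch of that match-action idea follows; the class and method names are illustrative, not the real OpenFlow API or wire protocol.

```python
# Illustrative sketch of the OpenFlow match-action idea: a centralized
# controller pushes flow rules, and each data-plane switch only matches
# packets against its rule table and applies the stored action.
# All names here are hypothetical, not the real OpenFlow API.

class Switch:
    def __init__(self):
        self.flow_table = []  # list of (match_fields, action) pairs

    def install_rule(self, match, action):
        """Called by the controller (control plane) to program this device."""
        self.flow_table.append((match, action))

    def forward(self, packet):
        """Data plane: apply the first rule whose fields all match."""
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send-to-controller"  # table miss: ask the controller

class Controller:
    """Control plane consolidated in one place, per the ONF definition."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, match, action):
        for sw in self.switches:
            sw.install_rule(match, action)

sw = Switch()
ctrl = Controller([sw])
ctrl.push_policy({"dst_port": 80}, "forward:web-servers")
print(sw.forward({"src": "10.0.0.5", "dst_port": 80}))   # forward:web-servers
print(sw.forward({"src": "10.0.0.5", "dst_port": 22}))   # send-to-controller
```

The point of the sketch is the division of labor: policy lives in one controller, while devices from any vendor need only implement the same simple match-action loop.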

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128040188000140

IT Infrastructure Security Program

In Firewall Policies and VPN Configurations, 2006

Topologies

Network infrastructure security:

1.

Create secure boundaries using firewalls, DMZs, and proxy servers.

2.

Create secure remote access.

3.

Create secure wireless access.

4.

Implement a segmented network.

5.

Implement network traffic security protocols for sensitive network traffic.

6.

Deploy network security technologies.

1.

Use Encrypting File System (EFS) or similar file encryption.

2.

Require and use strong user authentication, passwords, and account policies.

3.

Use the concept of "least privilege" when assigning user rights.

Security infrastructure components include routers, proxy servers, firewalls, and DMZs. Firewalls are pretty straightforward and can be implemented as hardware or software solutions. Let's take a side street and have a quick look at DMZs.

Demilitarized zones, or DMZs, are isolated network segments that typically sit between the Internet and your network, whether in front of or behind your firewall (or between two firewalls). There are many different ways to set up a DMZ; again, it's outside the scope of this book to discuss the design, implementation, and configuration of a DMZ. However, it might be helpful to discuss a few highlights of DMZ design that might help as you look at implementing or tightening a DMZ for your network.

Designing DMZs

DMZ design, like security design, is always a work in progress. As in security planning and analysis, we find DMZ design carries great flexibility and change potential to keep the protection levels we put in place in an effective state. The ongoing work is required so that the system's security is always as high as we can make it within the constraints of time and budget, while still allowing appropriate users and visitors to access the information and services we provide. You will find that the time and funds spent in the design process and preparation for the implementation are very good investments if the process is focused and effective; this will lead to a high level of success and a good level of protection for your network.

In this section of the chapter, we explore the fundamentals of the design process. We incorporate the information we discussed in relation to security and traffic flow to make decisions about how our initial design should look. Additionally, we'll build on that information and review some other areas of concern that could affect the way you design your DMZ structure.

Design of the DMZ is critically important to the overall protection of your internal network—and the success of your firewall and DMZ deployment. The DMZ design can comprise sections that isolate incoming VPN traffic, Web traffic, partner connections, employee connections, and public access to information provided by your organization. Design of the DMZ structure throughout the organization can protect internal resources from internal attack. As we discussed in the security section, it has been well documented that much of the risk of data loss, corruption, and breach actually exists within the network perimeter. Our tendency is to protect assets from external harm but to disregard the dangers that come from our own internal equipment, policies, and employees.

These attacks or disruptions do not arise solely from disgruntled employees; they can also be caused by well-intentioned employees. Each of these entry points is a potential source of loss for your organization and ultimately can provide an attack point to defeat your other defenses. Additionally, the design of your DMZ will allow you to avoid a single point of failure in your plan. This minimizes the problems and loss of protection that can occur because of misconfiguration of rule sets or ACLs, as well as reducing the problems that can occur due to hardware configuration errors.
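The paragraph above cites misconfiguration of rule sets and ACLs as a common cause of lost protection. A minimal Python sketch of first-match ACL evaluation shows how rule order alone, with no missing rule, can open a hole; the ports and rules are hypothetical.

```python
# First-match ACL evaluation, as used by most firewalls: the first rule
# that matches decides the verdict, with an implicit default deny. The
# two rule sets below differ only in order, showing how a misordered
# rule sequence silently changes what is allowed.

def matches(rule, packet):
    return rule["dst_port"] in ("any", packet["dst_port"])

def evaluate(acl, packet):
    """Return the verdict of the first matching rule; default deny."""
    for rule in acl:
        if matches(rule, packet):
            return rule["verdict"]
    return "deny"

correct = [
    {"dst_port": 23, "verdict": "deny"},      # block telnet first
    {"dst_port": "any", "verdict": "allow"},  # then permit the rest
]
misordered = list(reversed(correct))          # allow-any shadows the deny

print(evaluate(correct, {"dst_port": 23}))     # deny
print(evaluate(misordered, {"dst_port": 23}))  # allow
```

A layered DMZ design limits the damage of exactly this kind of mistake, because one shadowed rule no longer exposes the whole internal network.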

Remote Access

Remote access is granted in a number of different ways, so the way it should be secured varies widely. The basics are that the remote access servers should be physically secured (as should all infrastructure servers) in an access-controlled location. The number of accounts that are authorized to log onto the server for administrative purposes should be limited and audited. The communication link between the RAS and the remote users should be secured, as should the data on that link, if needed. The network traffic security methods include signing, encryption, and tunneling.

The level of these methods is determined by the system with the least capabilities. Older operating systems cannot utilize the latest encryption technologies, for example, so you might include policies that require that remotely connecting users use the latest version of Windows XP Professional, to enable the entire end-to-end communication link to use the strongest available encryption. You can also require strong authentication across remote links. Different operating systems implement this differently; in Windows Server 2003, for example, it's implemented through policies set in Administrative Tools | Routing and Remote Access.
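The "least capable system" problem described above is commonly handled by enforcing a protocol floor on the server side, so one outdated client cannot drag the link down to weak encryption. This is not the Windows policy mechanism the text describes, but an analogous sketch using Python's standard `ssl` module:

```python
# Server-side TLS context that refuses clients below an agreed floor.
# Clients that only speak older TLS versions fail the handshake instead
# of silently negotiating weaker encryption.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject anything older

print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```

The same idea applies to any remote-access link: publish the minimum protocol version as policy, enforce it at the server, and upgrade clients that cannot meet it.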

Wireless Access

We've devoted a whole chapter to wireless security, so we will just discuss the top-level items here:

Change access point default settings.

Disable SSID broadcast; create a closed system (one that does not respond to clients with "Any" SSID assigned).

Transmission power control (limiting the amount of power used for transmission to control the signal range).

Enable MAC address filtering.

Enable WEP or WPA.

Filter protocols.

Define IP allocations for the WLAN.

Use VPNs.

Secure users' computers.

All these choices have pros and cons, distinct advantages and disadvantages; you'll need to decide the right approach for your organization. As with all things in IT security, it's important that you understand the consequences of the solutions you're using, understand the configuration and maintenance of these elements, and be certain you test them well in a lab or isolated setting before implementing them across the enterprise.
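The checklist above lends itself to automated auditing. The sketch below checks a hypothetical access-point configuration against a few of the listed items; the configuration keys and values are assumptions, not any vendor's actual settings schema.

```python
# Hypothetical audit of some access-point hardening items from the
# checklist above: given an AP's configuration dictionary, report
# which recommendations are unmet. Keys are illustrative.

def audit(ap_config):
    findings = []
    if ap_config.get("ssid_broadcast", True):
        findings.append("SSID broadcast enabled")
    if not ap_config.get("mac_filtering", False):
        findings.append("MAC filtering disabled")
    if ap_config.get("encryption") not in {"WEP", "WPA"}:
        findings.append("no WEP/WPA encryption")
    if not ap_config.get("vpn_required", False):
        findings.append("VPN not required")
    return findings

# Factory-default settings typically fail every check:
default_ap = {"ssid_broadcast": True, "encryption": None}
print(audit(default_ap))
```

Running such a check in the lab before deployment matches the advice in the paragraph above: verify each hardening element in isolation before rolling it out across the enterprise.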

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781597490887500098

The Evolution of Communication Systems

Vinod Joseph, Brett Chapman, in Deploying QoS for Cisco IP and Next Generation Networks, 2009

1.23 Summary

The NGN infrastructure will carry real-time traffic in the form of voice and video. Voice would be from traditional fixed-line business, mobile users for 3G, and VoIP, all with high packet loss, delay, and jitter sensitivity. Signaling for the voice services would likewise be carried across the converged infrastructure, requiring priority. The network will also be carrying data with varying tolerance to SLA parameters. Premium-paying customers would also expect their data to be differentiated based on the additional fees they are charged.

Video in real time introduces extreme packet-loss sensitivity previously unheard of in the IP world. Losses greater than one packet in 1 million are often considered unacceptable. High-definition streams can reach 10 Mbps each depending on the compression technology, placing a huge demand on bandwidth with the deployment of video on demand across broadband.
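The two numbers in the paragraph above (a loss target of one packet in a million, and 10 Mbps per HD stream) can be turned into back-of-the-envelope figures. The packet size and the concurrent-stream count below are assumptions for illustration, not values from the text.

```python
# Rough arithmetic on the loss and bandwidth figures quoted above.

loss_target = 1 / 1_000_000      # max acceptable packet-loss ratio

# At 10 Mbps with ~1,300-byte video packets (an assumed size), a single
# stream carries roughly this many packets per second:
stream_bps = 10_000_000
packet_bits = 1300 * 8
pkts_per_sec = stream_bps / packet_bits          # ≈ 962 packets/s

# So the loss budget allows about one lost packet per ~17 minutes
# of a single stream:
secs_per_lost_pkt = 1 / (pkts_per_sec * loss_target)
print(round(pkts_per_sec))            # 962
print(round(secs_per_lost_pkt / 60))  # 17 (minutes)

# Aggregate demand for, say, 5,000 concurrent on-demand HD streams:
print(5000 * stream_bps / 1e9, "Gbps")  # 50.0 Gbps
```

Even under these modest assumptions, the loss target amounts to at most a handful of dropped packets per hour per viewer, which is why video forces QoS discipline that best-effort data never required.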

If the NGN infrastructure does not honor the requirements of each of the services adequately, the costs can be very high. Aside from obvious costs such as liquidated damages and lost call minutes, there is the ever-present, less tangible impact on customer satisfaction and experience and, ultimately, brand erosion.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978012374461600001X

Security as an Ongoing Process

Eric Seagren , in Secure Your Network for Free, 2007

Network Infrastructure Devices

The network infrastructure includes anything that is part of the network rather than being on the network, or using the network. These are the devices that move data through the network and include routers, switches, firewalls, and bridges. All these devices will require patching sooner or later. One of the biggest considerations you have to work around when developing your patch management procedures is the patch release schedule. Obviously, you can't patch a system if the patch isn't out yet. In the case of network devices, particularly firewalls and other security devices, patches for security vulnerabilities are typically released very quickly and often. This means your patch management system will need to include a process to schedule and prioritize the testing and application of the patches. Having a set schedule for patching, for example monthly, at least for the noncritical patches, enables the business units to schedule around the patching windows and account for any outages. By distributing a patching schedule, you help minimize the impact of your patching efforts. If you wait a week to patch a critical hole in your Internet-facing firewall, you are gambling that the hackers don't find your vulnerable firewall in the next week; on the other hand, if the patch is inadequately tested, you could create a service disruption if the patch causes any unexpected problems.

Always keep in mind that when applying patches to infrastructure devices the potential for service disruption is high. Because all your other devices rely on the underlying network infrastructure for communication, complications at the network level can have devastating consequences. If a single server patch causes the service to quit functioning, you are without the services of that one server. If the same thing happens to a core router for the network, it's possible that no network systems will be able to function. Because of the high potential for service disruption and the large scope of potential impact, patches to infrastructure devices should be tested extra thoroughly before being deployed. Although the risk that the patch will break something is generally low, you want to put particular emphasis on the testing of any optional or less-common features you may be using.
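The scheduling policy described above (a published monthly window for noncritical patches, expedited handling for critical ones) can be sketched in a few lines. The window day, severity labels, and the two-day expedite period are hypothetical choices, not the author's prescription:

```python
# Sketch of the patch-scheduling idea: critical patches are expedited
# after a short test period; everything else waits for the next monthly
# window that the business units have planned around.

from datetime import date, timedelta

MONTHLY_WINDOW_DAY = 15  # agreed patch day, published in advance

def patch_date(severity, released):
    """Return the planned deployment date for a patch."""
    if severity == "critical":
        # Minimal testing, then deploy; don't wait a week on a
        # vulnerable Internet-facing device.
        return released + timedelta(days=2)
    if released.day < MONTHLY_WINDOW_DAY:
        return released.replace(day=MONTHLY_WINDOW_DAY)
    # Released after this month's window: roll to next month's window.
    return (released.replace(day=1) + timedelta(days=32)).replace(
        day=MONTHLY_WINDOW_DAY)

print(patch_date("critical", date(2024, 3, 4)))  # 2024-03-06
print(patch_date("low", date(2024, 3, 4)))       # 2024-03-15
print(patch_date("low", date(2024, 3, 20)))      # 2024-04-15
```

The fixed window is what lets business units plan around outages; the expedite path embodies the trade-off the text describes between waiting too long and testing too little.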

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781597491235500108

Endpoint Security

Keith Lewis, in Computer and Information Security Handbook (Third Edition), 2017

2 Endpoint Solution: Options

Computer network infrastructures must have organized and secured framework methodologies when remote computers such as laptops or wireless-enabled devices connect to them. Industry-supported toolsets such as Symantec, Kaspersky, Sophos, Bitdefender, McAfee, TrendMicro, or Microsoft System Center Endpoint Protection [2] are some of the highly rated systems available today. This is the foundation and philosophy of architecting computer network design when it comes to EPS. This network security framework and approach helps deliver to your network technology teams more support alternatives with control features when it comes to managing the security for these devices connecting to your network. Predicting and securing possible infection routes for a virus or malware attack to take advantage of is key for an effective defense-in-depth approach encompassing all network communication access points on your company or organization's topology [3].

Hackers attempting to infiltrate your network systems would have limited access point potentials to break into, thanks to EPS Planning Risk Management. Client and employee user authorization security group settings that are set up to use role-based configuration profiles can also leverage and benefit from EPS solutions for easier-to-manage security and support coverage.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128038437000788

Challenges and Countermeasures

Aditya K Sood , Richard Enbody , in Targeted Cyber Attacks, 2014

8.2.4 Network Level Security

The network infrastructure and communication channels should be secured to deploy additional layers of security, as discussed below:

Organizations should deploy robust network perimeter security defenses such as IPS and IDS, email filtering solutions, and firewalls to restrict the entry of malicious code into the internal network. Organizations should install robust Domain Name System (DNS) sinkholes to prevent resolution of illegitimate domains and so restrict malicious traffic. Sinkholing is primarily based on the DNS protocol, and the servers are configured to provide falsified information (nonroutable addresses) to the compromised machines running malware. As a result, the malware fails to communicate with the control server and hence data exfiltration is stopped. This strategy should be adopted to implement an aggressive detection and prevention mechanism to subvert communication with malicious servers on the Internet. The sinkholes restrict the occurrence of infections in a silent way. Nothing is bulletproof, but perimeter defenses such as sinkholes add a lot to the security posture of the system.

Implementation of a Honeynet is also an effective strategy to understand the nature of malware. A Honeynet is a network of interconnected systems called Honeypots that run with vulnerable configurations, and the malware is allowed to install successfully in one of the systems in the Honeynet. This helps in understanding the malware design and behavior. The harnessed knowledge is used to build secure network defenses.

Strong traffic monitoring solutions should be deployed on the edges of the networks to filter the egress and ingress traffic flowing through the network infrastructure. The motive is to determine and fingerprint the flow of malicious traffic in the network. At the same time, active monitoring helps to understand user surfing habits and the domains they connect to. In addition, Security Information and Event Management (SIEM) solutions help administrators detect anomalous events occurring in the network. The primary motive behind building a SIEM platform is that it takes the output from various resources in the network, that is, events happening in the network, associated threats, and accompanying risks, to build a strong intelligence feed. This helps the analysts to reduce the impact on the business and to harden the security posture. SIEM is a centralized system that performs correlation and data aggregation over the network traffic to raise alerts.

Sensitive data flowing to and from the network should be properly encrypted. For instance, all web traffic with sensitive data should be sent over an HTTPS channel. HTTPS means that all the data sent using HTTP is encrypted using SSL. Basically, HTTP is served over SSL; to implement this, the webserver has to first run the SSL service on a specific port, and it should serve the SSL certificate to the browser (or any client) before starting communication over HTTP. This results in encryption of all the HTTP data exchanged between the browser and the webserver. However, SSL implementations are also available for different protocols. This protocol prevents active MitM attacks, which allow a malicious intruder to inject arbitrary code into, or decrypt, the communication channel on the fly.

Administrators should analyze server logs on a regular basis to find traces of attacks or malicious traffic. The administrators look for attack patterns related to several vulnerabilities such as injection attacks, file-uploading attacks, and brute-force attacks. Logs provide a plethora of information, such as source IP address, timestamps, port access, and the number of specific requests handled by the server. Administrators can dissect the malicious traffic to understand the nature of an attack. Regular log analysis should be a part of the security process and must be performed on a routine basis.

Enterprise networks should be properly segregated using well-constructed virtual local area networks, which segment the primary network into smaller networks so that strong access rights can be configured. Basically, dividing the network into small segments can help the administrators to deploy security at a granular level.

Administrators should follow the best secure device configuration practices, such as configuring strong and complex passwords for network devices such as routers, printers, and switches; using Simple Network Management Protocol (SNMP) strings that cannot be guessed; avoiding the use of clear-text protocols; disabling unrequired services; deploying least-privilege principles for restricting access to resources; configuring software installation and device change management policy; and using out-of-band management features.
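The DNS sinkholing countermeasure described in the first item above reduces to a simple resolution override: known-bad domains get a falsified, non-routable answer. A minimal Python sketch, with an illustrative blocklist and addresses:

```python
# Minimal sketch of DNS sinkholing: queries for known-bad domains
# resolve to a non-routable address, so malware on a compromised host
# never reaches its control server, and the query itself can be logged
# as an infection indicator. Domains and addresses are illustrative.

SINKHOLE_ADDR = "0.0.0.0"  # non-routable: traffic goes nowhere
BLOCKLIST = {"evil-c2.example", "exfil.example"}

def resolve(domain, real_dns):
    """Sinkhole-aware lookup over a plain name->address table."""
    if domain in BLOCKLIST:
        return SINKHOLE_ADDR          # falsified answer, silently logged
    return real_dns.get(domain)       # normal resolution

real_dns = {"intranet.example": "10.0.0.10"}
print(resolve("evil-c2.example", real_dns))   # 0.0.0.0
print(resolve("intranet.example", real_dns))  # 10.0.0.10
```

A production sinkhole sits in the resolver path rather than in application code, but the logic is the same: the override happens before any packet leaves the perimeter, which is why the text calls it a silent control.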

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128006047000085

IT Infrastructure

David Watson , Andrew Jones , in Digital Forensics Processing and Procedures, 2013

7.3 Infrastructure

The Forensic Laboratory network infrastructure is in two separate parts, the business infrastructure and the physically separated forensic case processing infrastructure.

The forensic case processing network is a totally closed network in that it has no external links permitted.

The business network is also a closed network but with access to the Internet, and this is strictly controlled by using firewalls.

The Forensic Laboratory does not permit wireless access to any of its resources.

Security of the network connections within the Forensic Laboratory is covered in Chapter 7, Section 7.7.

7.3.1 Equipment

The network infrastructure for both the business and the case processing networks comprises the following components:

cabling;

firewalls;

routers;

servers;

switches;

the SAN.

The networks are built and maintained by the IT Department, as required, in the Forensic Laboratory.

7.3.2 Securing of Cabling

Cabling is used to connect all IT equipment within the Forensic Laboratory. The Forensic Laboratory has made the conscious decision not to allow wireless connections on account of the material it processes and the possible risks of wireless networking. The policy for securing IT cabling in the Forensic Laboratory is given in Appendix 5.

7.3.2.1 Procedure for Siting and Protecting IT Cabling

When installing new or upgraded IT cabling, all possible steps must be taken to protect it from physical risks, to protect data from security threats, and to minimize possible risks from environmental hazards. The following steps are undertaken:

1.

A need is identified for installation of new IT cabling or replacement or repair of existing cabling.

2.

The IT Manager, the Information Security Manager, and the Laboratory Manager (if appropriate) perform an assessment to:

consider the requirements of the Forensic Laboratory with regard to installation of the cabling;

consider all physical and environmental problems;

consider all security issues regarding the physical location of cabling within the Forensic Laboratory premises;

consider all security issues regarding the data carried on cabling and its classification;

determine where the cabling is best routed, and where any associated equipment is best sited;

during this assessment the IT Manager, the Information Security Manager, and the Laboratory Manager (if appropriate) may:

-

consult other Forensic Laboratory employees as required (for example, IT or non-IT employees, or Managers who may be using or sited close to the new cabling and associated equipment);

-

consider all issues outlined in the Forensic Laboratory Policy for Securing IT Cabling, as defined in Appendix 5;

-

consider isolation of the equipment (to allow the Forensic Laboratory IT to reduce the general level of protection that is required) if required and defined in Section 7.3.3;

-

consider the impact of a disaster in nearby premises.

3.

The IT Manager, in association with the Information Security Manager and the Laboratory Manager (if appropriate), makes a decision as to where the new cabling and any associated equipment is to be sited.

4.

The IT Manager communicates with all interested parties as needed to:

outline the decision regarding the routing of the new cabling and the siting of any associated equipment;

outline the reasons for the decision;

invite further comments (if required).

5.

Any issues that arise at this stage must be agreed and confirmed before the new cabling is installed.

6.

The cabling is installed in accordance with the agreed conditions.

7.

The IT Manager, the Information Security Manager, and the Laboratory Manager (if appropriate) perform a review to:

ensure that the new cabling has been routed in accordance with the agreed conditions;

ensure that the new cabling has been afforded the best possible protection from all potential security threats;

address any issues that may have become evident after installation.

8.

In the event that changes are required, the IT Manager e-mails the relevant stakeholders to outline proposed changes, and the changes are implemented in accordance with standard Forensic Laboratory IT change management procedures, as defined in Section 7.4.3.

Note

In the United States, much of this is dictated by the National Fire Protection Association publication #70: National Electrical Code, which is the benchmark for safe electrical design, installation, and inspection to protect people and property from electrical hazards. Other jurisdictions may have similar requirements, and these must be followed as applicable.

7.3.3 Isolating Sensitive Systems

In the event that the Forensic Laboratory manages or uses a system that contains sensitive or confidential data, where a Customer requires a dedicated computing environment that is physically and logically segregated from other systems holding less critical information, the following guidelines should be followed for a dedicated computing environment:

apply operating system and application hardening procedures where possible;

logical segregation via VLANs;

physical segregation via separate rooms, dedicated servers, or computers;

use of physical access control mechanisms;

use of strong authentication methods;

when a sensitive application is to run in a shared environment, employ strict resource, file, or object share or permission controls.

7.3.4 Siting and Protecting IT Equipment

IT equipment within the Forensic Laboratory has specific needs in addition to the baseline physical security implemented within the Forensic Laboratory as a whole. All information processing equipment and data under the control of the Forensic Laboratory IT Department must be carefully sited to physically protect that equipment or data from security threats, and to minimize potential risks from environmental hazards. The Forensic Laboratory policy for siting and protecting IT equipment is given in Appendix 6.

7.3.4.1 Procedure for Siting and Protecting IT Equipment

The Forensic Laboratory should have the following procedures in place to determine how new information processing equipment is to be installed in order to physically protect it from security threats and to minimize possible risks from environmental hazards.

1.

A need is identified for installation of a new item of IT equipment.

2.

The IT Manager, in association with the Information Security Manager and the Laboratory Manager (if appropriate), performs a risk assessment to:

consider all usage requirements;

consider all security issues regarding the equipment's usage and location within the Forensic Laboratory premises;

determine where the equipment is best sited;

during this assessment the IT Manager, in association with the Information Security Manager and the Laboratory Manager (if appropriate):

-

consults other employees as required (for example, members of the IT Department, other business users, and/or Managers who may be using or sited close to the new equipment);

-

considers all issues outlined in the Forensic Laboratory Policy for Siting and Protecting IT Equipment, as defined in Appendix 6.

additional items that may warrant consideration for particular items of equipment that may require special protection are:

-

isolation of the equipment (to permit the Forensic Laboratory to reduce the general level of protection that is required);

-

the impact of a disaster in nearby premises.

3.

The IT Manager, in association with the Information Security Manager and the Laboratory Manager (if appropriate), makes a decision as to where the new equipment is to be sited to afford it the best protection within the Forensic Laboratory.

4.

The IT Manager e-mails all interested parties as needed to:

outline the decision regarding the siting of the new equipment;

outline the reasons for the decision/proposed location;

invite further comments (if required).

5.

Any issues that arise at this stage must be agreed and confirmed before the new equipment is installed.

6.

The new equipment is installed in accordance with the agreed conditions after being approved by the CAB, as part of the Forensic Laboratory Change Management Procedure, as defined in Section 7.4.3.

7.

The IT Manager performs a review to:

ensure that the new equipment has been sited in accordance with the agreed conditions;

ensure that the new equipment cabling has been afforded the best possible protection from all potential security threats;

address any issues that may have become evident after installation.

7.3.5 Securing Supporting Utilities

The IT Manager controls the security of information processing equipment and data in terms of supporting utilities in order to minimize loss and damage to the business.

Special controls are implemented to safeguard supporting utilities for information processing equipment and information processing facilities:

a generator or other alternate power supply for the Forensic Laboratory is available and is maintained and regularly tested;

all of the utilities are monitored to determine if thresholds are breached, at which point alarms are sounded. This includes:

-

water detection;

-

power failure or variation;

-

UPS battery life and stability;

-

air-conditioning;

-

humidity;

-

heat;

-

smoke.

all servers are dual power sourced from different supplies;

a UPS for the Forensic Laboratory is available on all critical servers, telephone switches, and other critical infrastructure, and is regularly tested;

basic safeguards are used, i.e., health and safety best practice;

Cat 5 or Cat 6 cabling and mains electrical cabling must be separated and not use the same ducting;

emergency power-off switches are available near the exit doors of the Server Rooms;

fire detection and fire quenching is appropriate and in place, as defined in Chapter 2, Section 2.3.4;

the air conditioning has sufficient redundancy to allow for a single failure and has enough power to keep the area at the appropriate temperature;

the water supply is stable and adequate for fire suppression purposes.

All of the above are monitored using a centralized building management system, and alerts are raised and sent to the appropriate managers. The Facilities Manager is always alerted for all breaches.
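The threshold-and-alert behavior of the building management system described above can be sketched in a few lines. The metrics and permitted ranges below are hypothetical, not the Forensic Laboratory's actual settings:

```python
# Sketch of utility monitoring: each monitored quantity has a permitted
# range, and any reading outside its range produces an alert for the
# appropriate managers. Ranges are illustrative.

THRESHOLDS = {                     # metric: (low, high)
    "temperature_c": (16, 24),
    "humidity_pct": (40, 60),
    "ups_battery_pct": (80, 100),
}

def check(readings):
    """Return a list of alert strings for out-of-range readings."""
    alerts = []
    for metric, value in readings.items():
        low, high = THRESHOLDS[metric]
        if not low <= value <= high:
            alerts.append(f"{metric}={value} outside {low}-{high}")
    return alerts

ok = {"temperature_c": 21, "humidity_pct": 55, "ups_battery_pct": 95}
hot = {"temperature_c": 31, "humidity_pct": 55, "ups_battery_pct": 62}
print(check(ok))   # []
print(check(hot))  # temperature and UPS battery breaches
```

In the centralized system described in the text, the alert list would be routed to the responsible managers, with the Facilities Manager copied on every breach.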

Note

All other utilities in the building are normally under the control of the utility companies, and the Forensic Laboratory will be dependent on these and have no control over their supply.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781597497428000078

Diagramming the Network Infrastructure

Dale Liu , in Cisco Router and Switch Forensics, 2009

Summary

Documenting a network infrastructure is more than simply dragging and dropping shapes around in a diagramming package. The network diagram, and accompanying files, should be able to show not merely the physical elements of the network, such as where the wireless access points are or how many routers there are, but also how the network behaves. The logical implementation of network subnets, physical or virtual, should be easy to follow; the logical implementation of services should be understandable.

Supporting the network diagram should be documentation of the configurations of the major services and devices on the network. A breakdown of the rules that have been implemented on the firewall should show which types of traffic, and the source and destination, each rule manages. The configuration of IDSs and logging systems should also be detailed, to the point where these systems could be rebuilt if it were necessary.
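One way to make the firewall breakdown reviewable, and precise enough to rebuild from, is to record each rule as structured data rather than free text. The sketch below is a minimal illustration; the field names and sample rules are assumptions, not a real rule set.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FirewallRule:
    name: str
    action: str             # "allow" or "deny"
    protocol: str           # "tcp", "udp", "icmp", or "any"
    source: str             # CIDR or "any"
    destination: str        # CIDR or "any"
    port: Optional[int]     # None where no port applies

RULES = [
    FirewallRule("web-in", "allow", "tcp", "any", "10.0.1.10/32", 443),
    FirewallRule("dns-out", "allow", "udp", "10.0.0.0/16", "any", 53),
    FirewallRule("default", "deny", "any", "any", "any", None),
]

def summarize(rules):
    """Render each rule as one reviewable line: action, protocol, src -> dst:port."""
    lines = []
    for r in rules:
        port = f":{r.port}" if r.port is not None else ""
        lines.append(f"{r.action.upper():5} {r.protocol:4} {r.source} -> {r.destination}{port}")
    return lines

for line in summarize(RULES):
    print(line)
```

Kept in version control, such a record doubles as the rebuild source the text calls for: the same data can generate both the human-readable summary and, with a vendor-specific emitter, the device configuration.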

Documenting the use and configuration of services should also be completed, for central services such as e-mail and database systems, along with network infrastructure services critical to the functioning of the network, such as DHCP, DNS, and the ACLs that govern traffic flow.

There is no doubt that diagramming all this information is a large task, but if it's been diagrammed it means it's been reviewed and understood, which is the ultimate goal.


URL:

https://www.sciencedirect.com/science/article/pii/B9781597494182000053

Cloud Access and Deject Interconnection Networks

Dan C. Marinescu , in Cloud Computing (Second Edition), 2018

5.2 The Transformation of the Internet

The Internet is continually evolving under the pressure of its own success and the need to accommodate new applications and a larger number of users. Initially conceived as a data network, a network designed to transport data files, the Internet has morphed into today's network supporting data-streaming and applications with real-time constraints, such as the Lambda service offered by AWS. The discussion in this section is restricted to the aspects of Internet evolution relevant to cloud computing.

Tier 1, 2, and 3 networks. To understand the architectural consequences of Internet evolution we first discuss the relations between two networks. Peering means that two networks exchange traffic between each other's customers freely. Transit requires a network to pay another one for accessing the Internet. The term customer describes the network that pays another network for that access.

Based on these relations, networks are usually classified as Tier 1, 2, and 3. A Tier 1 network can reach every other network on the Internet without purchasing IP transit or paying settlements; examples of Tier 1 networks are Verizon, AT&T, NTT, and Deutsche Telekom, see Figure 5.4.

Figure 5.4

Figure 5.4. The relation of Internet networks based on transit and paying settlements. There are three classes of networks, Tier 1, 2, and 3; an IXP is a physical infrastructure allowing ISPs to exchange Internet traffic.

A Tier 2 network is an Internet service provider that engages in the practice of peering with other networks, but still purchases IP transit to reach some portion of the Internet; Tier 2 providers are the most common providers on the Internet. A Tier 3 network purchases transit rights from other networks (typically Tier 2 networks) to reach the Internet. A point-of-presence (PoP) is an access point from one place to the rest of the Internet.
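These tier relationships can be made concrete with a toy model: a network is Tier-1-like if it can reach every other network using only its peers and its customer cone, i.e., without buying transit. The topology below is invented purely for illustration.

```python
from collections import defaultdict

peers = {("A", "B"), ("B", "C")}        # settlement-free peering links
providers = {"C": {"B"}, "D": {"C"}}    # customer -> set of transit providers

def customer_cone(net, providers):
    """All networks reachable by walking provider->customer edges downward."""
    customers = defaultdict(set)
    for cust, provs in providers.items():
        for p in provs:
            customers[p].add(cust)
    seen, stack = set(), [net]
    while stack:
        for c in customers[stack.pop()]:
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def reachable_without_transit(net, peers, providers):
    """Networks reachable via the net's own customers and its peers' customers."""
    reach = {net} | customer_cone(net, providers)
    for a, b in peers:
        if a == net:
            reach |= {b} | customer_cone(b, providers)
        if b == net:
            reach |= {a} | customer_cone(a, providers)
    return reach

# B peers with A and C and reaches D through its customer C: Tier-1-like.
print(reachable_without_transit("B", peers, providers))
# D has no peers and no customers: it must buy transit (Tier 3).
print(reachable_without_transit("D", peers, providers))
```

Real inter-domain routing is governed by valley-free policy rules that this sketch ignores, but the customer-cone idea is the same one used to classify networks by tier.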

An Internet exchange point (IXP) is a physical infrastructure allowing Internet Service Providers (ISPs) to exchange Internet traffic. IXPs interconnect networks directly, via the exchange, rather than through one or more third-party networks. The advantages of direct interconnection are numerous, but the main reasons to implement an IXP are cost, latency, and bandwidth. Traffic passing through an exchange is typically not billed by any party, whereas traffic to an ISP's upstream provider is.

IXPs reduce the portion of an Internet service provider's traffic that must be delivered via its upstream transit providers, thereby reducing the average per-bit delivery cost of its service. Furthermore, the increased number of paths available through the IXP improves routing efficiency and fault-tolerance. A typical IXP consists of one or more network switches, to which each of the participating ISPs connects.

New technologies such as web applications, cloud computing, and content-delivery networks are reshaping the definition of a network, as we can see in Figure 5.5 [287]. The Web, gaming, and entertainment are merging, and more computer applications are moving to the cloud. Data streaming consumes an increasingly larger fraction of the available bandwidth as high-definition TV sets become less expensive and content providers such as Netflix and Hulu offer customers services that require a significant increase of the network bandwidth.

Figure 5.5

Figure 5.5. The transformation of the Internet; the traffic carried by Tier 3 networks increased from 5.8% in 2007 to 9.4% in 2009; Google applications accounted for 5.2% of the traffic in 2009 [287].

Does the network infrastructure adequately respond to the current need for bandwidth? The Internet infrastructure in the US is falling behind in terms of network bandwidth, see Figure 5.6. A natural question to ask is: where is the actual bottleneck limiting the bandwidth available to a typical Internet broadband user? The answer is: the "last mile," the link connecting the home to the ISP network. Recognizing that the broadband access infrastructure ensures continual growth of the economy and allows people to work from any site, Google has initiated the Google Fiber Project, which aims to provide a 1 Gbps access speed to individual households through FTTH. 3

Figure 5.6

Figure 5.6. Broadband access: the average download speed advertised in several countries.

Migration to IPv6. The Internet Protocol, Version 4 (IPv4), provides an addressing capability of 2^32, or approximately 4.3 billion addresses, a number that proved to be insufficient. Indeed, the Internet Assigned Numbers Authority (IANA) assigned the last batch of five address blocks to the Regional Internet Registries in February 2011, officially depleting the global pool of completely fresh blocks of addresses; each of the address blocks represents approximately 16.7 million possible addresses.
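The figures quoted above are easy to verify: the five blocks handed out in February 2011 were /8 blocks, each covering 2^(32-8) addresses.

```python
ipv4_total = 2 ** 32          # total IPv4 address space
slash8_block = 2 ** (32 - 8)  # one /8 address block, as assigned by IANA
ipv6_total = 2 ** 128         # total IPv6 address space

print(ipv4_total)     # 4294967296, about 4.3 billion
print(slash8_block)   # 16777216, about 16.7 million
print(float(ipv6_total))  # about 3.4e38
```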

The Internet Protocol, Version 6 (IPv6), provides an addressing capability of 2^128, or approximately 3.4 × 10^38 addresses. There are other major differences between IPv4 and IPv6:

Multicasting. IPv6 does not implement traditional IP broadcast, i.e., the transmission of a packet to all hosts on the attached link using a special broadcast address, and, therefore, does not define broadcast addresses. IPv6 supports new multicast solutions, including embedding rendezvous point addresses in an IPv6 multicast group address. This solution simplifies the deployment of inter-domain solutions.

Stateless address autoconfiguration (SLAAC). IPv6 hosts can configure themselves automatically when connected to a routed IPv6 network using the Internet Control Message Protocol version 6 (ICMPv6) router discovery messages. When first connected to a network, a host sends a link-local router solicitation multicast asking for its configuration parameters. If suitably configured, routers respond to such a request with a router advertisement packet that contains network-layer configuration parameters.

Mandatory support for network security. Internet Protocol Security (IPsec) is an integral part of the base protocol suite in IPv6, while it is optional for IPv4. IPsec is a protocol suite operating at the IP layer; each IP packet is authenticated and encrypted. Other security protocols, e.g., the Secure Sockets Layer (SSL), the Transport Layer Security (TLS), and the Secure Shell (SSH), operate at the upper layers of the TCP/IP suite. IPsec uses several protocols: (1) Authentication Header (AH) supports connectionless integrity, data origin authentication for IP datagrams, and protection against replay attacks; (2) Encapsulating Security Payload (ESP) supports confidentiality, data-origin authentication, connectionless integrity, an anti-replay service, and limited traffic-flow confidentiality; (3) Security Association (SA) provides the parameters necessary to operate the AH and/or ESP operations.
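The SLAAC mechanism described above can be illustrated with the classic modified EUI-64 step, in which a host forms its 64-bit interface identifier from its MAC address (insert ff:fe in the middle and flip the universal/local bit) and prepends the /64 prefix learned from the router advertisement. Note that many modern hosts instead use randomized identifiers for privacy; this sketch shows only the original scheme.

```python
def eui64_interface_id(mac: str) -> str:
    """Modified EUI-64 interface identifier from a colon-separated MAC address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                         # flip the universal/local bit
    full = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    # Render as four 16-bit hex groups, dropping leading zeros per IPv6 notation.
    return ":".join(f"{full[i] << 8 | full[i + 1]:x}" for i in range(0, 8, 2))

def slaac_address(prefix: str, mac: str) -> str:
    """Combine an advertised /64 prefix with the EUI-64 interface identifier."""
    return prefix + eui64_interface_id(mac)

# Link-local address for a hypothetical MAC address:
print(slaac_address("fe80::", "00:1a:2b:3c:4d:5e"))  # fe80::21a:2bff:fe3c:4d5e
```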

Unfortunately, migration to IPv6 is a very challenging and costly proposition [115]. A simple analogy allows us to explain the difficulties related to migration to IPv6. The phone numbers in North America consist of 10 decimal digits. This scheme supports up to 10 billion phones but, in practice, we have fewer available numbers. Indeed, some phone numbers are wasted because we use area codes based on geographic proximity and, on the other hand, not all available numbers in a given area are allocated.

To overcome the limited number of phone numbers in this scheme, large organizations use private phone extensions that are typically 3 to 5 digits long; thus, a single public phone number can translate to 1000 phones for an organization using a 3-digit extension. Analogously, Network Address Translation (NAT) allows a single public IP address to support hundreds or even thousands of private IP addresses. In the past, NAT did not work well with applications such as VoIP (Voice over IP) and VPN (Virtual Private Network). Nowadays, Skype and STUN-based VoIP applications work well with NAT, and NAT-T and SSL VPNs support VPN traversal of NAT.
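The phone-extension analogy maps directly onto a toy NAT table: many private (address, port) pairs share one public address and are told apart by translated ports. The class below is a simplified sketch with invented addresses; real NAT devices also track protocol, direction, and connection state.

```python
class Nat:
    """Minimal port-translating NAT: maps private endpoints to public ports."""

    def __init__(self, public_ip: str, first_port: int = 20000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.table = {}   # (private_ip, private_port) -> public_port

    def translate(self, private_ip: str, private_port: int):
        """Return the public (ip, port) this private endpoint appears as."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        return (self.public_ip, self.table[key])

nat = Nat("203.0.113.7")
print(nat.translate("10.0.0.5", 51000))   # ('203.0.113.7', 20000)
print(nat.translate("10.0.0.6", 51000))   # ('203.0.113.7', 20001)
print(nat.translate("10.0.0.5", 51000))   # reuses ('203.0.113.7', 20000)
```

The same colliding private port (51000) on two hosts is disambiguated by the translated public port, just as one public phone number fans out to many extensions.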

If the phone companies decide to promote a new scheme based on 40 decimal digit phone numbers, we will need new telephones. At the same time we will need new phone books, much thicker as each phone number is 40 characters instead of 10; each individual will need a new personal address book; and virtually all the communication and switching equipment and software will need to be updated. Similarly, the IPv6 migration involves upgrading all applications, hosts, routers, and DNS infrastructure; moreover, moving to IPv6 requires backward compatibility, so any organization migrating to IPv6 should maintain a complete IPv4 infrastructure.


URL:

https://www.sciencedirect.com/science/article/pii/B9780128128107000078

Disaster Recovery

Kelly C. Bourne , in Application Administrators Handbook, 2014

12.9.4 Firewalls

Along with the network infrastructure, firewalls really need to be in place before applications can be built in the DR environment. Setting up and maintaining a firewall isn't an easy chore. If you have to move the production application to the DR site, then it's possible that the firewall will need to be changed to make everything work as expected. Here are some questions to make sure you have answers to.

Have the firewall port openings in the DR environment been set to mimic the Production environment?

Were there any firewall issues when the DR plan was last tested?

If changes need to be made, who can handle this?

Do you have their name, e-mail address, office phone #, and cell number?

Do you have the name of that person's backup and supervisor in case the main contact is unavailable?

If changes need to be made, how long does it take to make them and for them to become effective?
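The first question above, whether the DR port openings mimic Production, can be answered mechanically if both rule sets are exported in a comparable form. A minimal sketch, with invented (protocol, port, source) rules:

```python
# Hypothetical exported rule sets for the two environments.
prod_rules = {
    ("tcp", 443, "any"),
    ("tcp", 1433, "10.0.0.0/16"),
    ("udp", 53, "any"),
}
dr_rules = {
    ("tcp", 443, "any"),
    ("udp", 53, "any"),
}

# Set difference in each direction flags drift between the environments.
missing_in_dr = prod_rules - dr_rules
extra_in_dr = dr_rules - prod_rules

for rule in sorted(missing_in_dr):
    print("missing in DR:", rule)    # here: the SQL Server opening on 1433
for rule in sorted(extra_in_dr):
    print("unexpected in DR:", rule)
```

Running such a comparison as part of each DR test turns the checklist question into a repeatable check rather than a manual review.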


URL:

https://www.sciencedirect.com/science/article/pii/B9780123985453000121