The Practicality and Danger of Cross-Site Request Forgery (CSRF) attacks

I often find myself in discussions with software development teams about a medium-risk defect finding: Cross-Site Request Forgery (CSRF). A medium risk means that persistent attempts may lead to a compromise. Because the prerequisites for launching a CSRF attack are fairly complicated, I am always bombarded with questions and disputes about why we flag this defect.

The Danger

In a nutshell, CSRF is an attack that leverages trust. PortSwigger (the maker of Burp Suite) explains it in detail. Imagine a phishing email enticing you to click a link that supposedly redirects you to a website you wanted to visit. But upon clicking, the link triggers a transaction on a website where you have an active session, with all the parameter values preset. Without you knowing, and without any credentials being stolen, a transaction is successfully executed.

An example simulation is shown below using DVWA (Damn Vulnerable Web Application) and Burp Suite (a proxy tool).

In this scenario, the attacker’s goal is to change the password of a DVWA account by presetting the parameters of the change-password function. The victim does not have to visit that specific page; he or she only needs to have an active session when the phishing link is clicked.

When we send the request through the proxy, the GET request to the server looks like this:
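A request of that shape, with placeholder host, session, and password values, looks roughly like this:

GET /vulnerabilities/csrf/?password_new=NEWPASS&password_conf=NEWPASS&Change=Change HTTP/1.1
Host: dvwa.local
Cookie: PHPSESSID=placeholder-session-id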

This request contains the parameters password_new (New Password), password_conf (Confirm New Password), and Change (the change-password action). Assuming the attacker knows this specific request, he can craft a prefilled transaction to use in his phishing email.

Burp Suite Professional has a Generate CSRF PoC feature that creates this transaction:

You can then set the values of the change-password parameters. In this simulation, we changed the password to ‘test.’ The parameters and values are shown below:
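Under the hood, a generated PoC is just a small HTML page that submits the same parameters to the vulnerable endpoint. A minimal sketch (the target host is a placeholder; the parameter names follow the change-password request above) could look like this:

<html>
  <body>
    <!-- forged change-password request with the values preset by the attacker -->
    <form action="http://dvwa.local/vulnerabilities/csrf/" method="GET">
      <input type="hidden" name="password_new" value="test" />
      <input type="hidden" name="password_conf" value="test" />
      <input type="hidden" name="Change" value="Change" />
      <input type="submit" value="Click me" />
    </form>
  </body>
</html>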

You can test the generated PoC in a web browser. It loads just like the link you would include in a phishing email.

When you click the button and the request passes through the proxy, you will see the CSRF PoC we created, and the server accepts it as a legitimate request.

The application will tell you that the password has changed.

When validated in a new session (logging out and then logging back in), the new credentials are accepted.

That’s what a typical CSRF attack looks like. What are the issues, then, and why do development teams question the relevance of CSRF findings?

The Practicality

In the simulation shown in the first part, there are a lot of assumptions.

  1. The assumption that the user has an active session

We are assuming that the user is currently using the session we are targeting. Even if the user clicks the link, the attack will not succeed if the session is not active.

  2. The assumption that the user will fall for the phishing email and click the link

We are also assuming that we can lure the user into clicking the link.

  3. The assumption that 1 and 2 are happening at the same time. This is self-explanatory: both conditions need to co-exist for the attack to be successful.

The argument from developers, project managers, and even application owners is about the practicality of CSRF. As mentioned in the introduction, the attack leverages trust, and there is no single guaranteed formula for exploiting a user’s trust. So, with a valid argument from the developers, why do we still flag it as a defect?

The Judgment 

The rating given to each defect depends on its severity. Even with the many assumptions and prerequisites required to launch the attack successfully, the application is still at risk from CSRF; the probability of a successful attack never goes away. From the security side, we also know the attack itself is easy to launch once the conditions line up, so we rate it medium, meaning persistent attempts may lead to compromise.

Furthermore, security best practices tell us that CSRF can be mitigated through layered defenses at the code level. One of the best mitigations is an anti-CSRF token generated by the server, which the application validates whenever a transaction is executed. PortSwigger (Burp Suite) has a good article about CSRF tokens.

Here is a code snippet from a remediated change-password function in DVWA. It shows a token validation step that lets the server check whether the user intentionally made the request. At the end of the code, a token-generation function is called to produce the token for the next transaction.
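As a rough PHP sketch of that pattern (a generic session-based token, not DVWA’s actual helper functions):

<?php
session_start();

// 1. Validate the token submitted with the request against the one stored in the session.
if (!isset($_POST['user_token'], $_SESSION['csrf_token'])
    || !hash_equals($_SESSION['csrf_token'], $_POST['user_token'])) {
    die('CSRF token is incorrect. Request rejected.');
}

// 2. Token is valid: the user intentionally made the request, so process the password change.
$pass_new  = $_POST['password_new'];
$pass_conf = $_POST['password_conf'];
if ($pass_new === $pass_conf) {
    // ... hash the new password and update it in the database here ...
    echo 'Password changed.';
}

// 3. Generate a fresh token to be embedded in the form for the next transaction.
$_SESSION['csrf_token'] = bin2hex(random_bytes(32));
?>

Because the server embeds the current token in the legitimate form as a hidden field, a forged request coming from an attacker’s page will not carry a valid token.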

Conclusion

CSRF attacks are prevalent in applications, and at the same time, they are not easy to exploit. However, the security tester’s role is to exhaust all possible ways of controlling the application and to check whether its security can be improved. Thorough CSRF checks are especially important in admin panels, where many CRUD actions are performed with escalated privileges (such as admin or super admin). Once the defect is shown to be exploitable, remediation should be carried out as soon as possible.

How NMAP scan types complement your Vulnerability Scanner

Most IT organizations use vulnerability scanners to help identify weaknesses in their infrastructure. Outsourced security projects use them for the same purpose. While the scanners generate voluminous results, they also have some limitations. To be fair, vulnerability scanners help a great deal in determining, more or less, the status of your security posture. But in reality, no single security tool can do it all; those who claim to have the all-in-one tool may be saying it just to make a sale! That’s why it’s important to know which tools can complement each other to produce better results.

Challenges and Limitations of Vulnerability Scanners

Like anti-malware solutions, the effectiveness of a vulnerability scanner is based on its knowledge base or signatures. The more it knows about known vulnerabilities, the better. That is usually what separates free scanners from commercial ones.

False positives appear, and there can be a lot of them. A large number of results does not mean much if they are not relevant. False positives are findings that do not apply to the target, such as Linux vulnerabilities reported against a Windows machine. This usually happens when part of a signature matches a file or directory on the target. Manual validation should be conducted afterward to filter them out.

Missed ports or services, due to scan speed or incomplete identification, are another limitation of vulnerability scanners. Because the scanners ship with pre-configured settings, a high-numbered port (or the dynamic port range in general, 49152 to 65535) may be open but still missed.

NMAP scan types can help resolve the third limitation. 

NMAP is one of the common tools you will use in your VAPT activities, since it provides a lot of information about your target: open ports, services and versions, and even the platform/OS through fingerprinting.

NMAP lets you run anything from a quick scan to a thorough, stealthy one. Based on initial observations, you may realize that a firewall or IDS sits between you and your target, which compels you to change the scan type and adjust the timing template. In other words, human intervention is vital for arriving at more accurate results.
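For example, when a firewall or IDS is suspected, one option is a slower SYN scan against the same lab target; the timing template below is only illustrative:

nmap -sS -T2 192.168.33.165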

Sample: Metasploitable 2

In this example, I did a sample vulnerability scan using Nessus Essentials (it’s free!) targeting Metasploitable 2 in a VM. Here, you can see that the majority of the critical findings are related to OS updates. But in reality, there are open ports that can be easily exploited to gain higher privileges.

We can do a quick scan using NMAP to get the open ports and cross-reference with our Vulnerability Scan results:

nmap 192.168.33.165


So, there are a lot of open ports! What can we do with these results? We can dig deeper and check the service versions using the -sV option. This scan takes longer to finish, so we need to use the settings effectively. As you can see below, the applications and their versions are listed in the NMAP results. This is very useful, ‘juicy’ information, especially during the gaining-access/exploitation phase.

nmap -sV 192.168.33.165

You may now cross-reference these results with the findings from your vulnerability scanner; there may be open ports or other items it missed. You can also check whether each service version has a corresponding known CVE.
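If the NSE scripts are available in your NMAP installation, you can also run the scripts in the ‘vuln’ category against the detected services as a rough first pass (coverage depends on the scripts installed):

nmap -sV --script vuln 192.168.33.165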

More extensive scan

Another scan you can do is a complete port scan from 1 to 65535 without a service version check. The goal is simply to list all open ports, because the normal scan may have missed some; that is why you explicitly command NMAP to scan them all.

nmap -p1-65535 192.168.33.165

Comparing the regular scan and service scan with the complete port scan, we can see more open ports, mostly high-numbered ones. We can also see a low-numbered port that the regular NMAP scan did not initially detect: 3632/tcp (distccd).

You can now explore these open ports further and check whether the services are exposed or even using default credentials.

We run another service version scan to determine the services running:

nmap -sV -p3632,8787,36964,38859,45861,48981 192.168.33.165

Conclusion

Human intervention in the scanning phase is important for setting the stage for the next phases: exploitation and post-exploitation. Choosing the scan type and customizing it to the need is crucial for getting a clearer picture of the target’s attack surface and open opportunities. NMAP is a handy tool for finding those opportunities in detail, and it is definitely a big help in complementing the results of a vulnerability scanner.

Bypassing SSL Pinning and Traffic Redirection to Burp Suite using MobSF and Genymotion

In typical web application security testing, testers use proxy tools such as Burp Suite or OWASP ZAP to tamper with the parameters of HTTP requests and observe the traffic. There are also built-in scanners and add-ons/plugins that can be integrated for more specific tests. For a web application served over HTTPS, the usual resolution is to add Burp’s certificate to the trusted certificates so the traffic can still pass through the proxy.

However, this becomes a problem when doing security tests on mobile apps. Many mobile apps implement SSL pinning, so the app will not connect through the proxy because it does not recognize the connection as legitimate. Installing Burp’s certificate in the browser does not help, since the mobile app does not go through the mobile browser. There are different approaches to resolving this issue. One is to root the OS and install Burp’s certificate in the system certificate store; by default, on an unrooted device, Burp’s certificate can only be installed in the user certificate store. The other approach is to disassemble the .apk file (assuming Android) using apktool, or hook the app at runtime with Frida, and then disable the SSL pinning or make the app treat Burp’s certificate as valid.

Depending on the setup, either approach may work. But the steps and tooling can be complex, since you may need to disassemble and reassemble the code. Sometimes it takes trial and error to find out which approach or tweak will work.
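As a rough sketch of the Frida route, the app can be spawned with a pinning-bypass script loaded at runtime; the package name and script file below are placeholders, not from a specific app:

frida -U -f com.example.app -l ssl-pinning-bypass.js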

MobSF Dynamic Analysis

One of the tools I found is the Mobile Security Framework (MobSF). It is a security tool that performs both static and dynamic analysis for Android, iOS, and Windows apps. What I like about it is that it automates the disassembly and the analysis of the manifest and other parts of the code. It also provides risk scoring based on the OWASP Mobile Top 10 and CVSS.

One of its most important features is dynamic analysis. It can run the uploaded APK in an emulator and execute runtime tests. Note that for dynamic analysis to work, MobSF must be installed on the host, not in a guest/VM.

Bypassing SSL Pinning

Bypassing SSL pinning is easy once you have set up MobSF’s dynamic analysis. Frida is already built in, and you can view its logs. In this example, we uploaded Wikipedia’s APK for static and dynamic analysis. When you start the dynamic analyzer, there are default instrumentation options such as API Monitoring, SSL Pinning Bypass, Root Detection Bypass, and Debugger Bypass.

Go to the Frida Live Logs to see the status of the hooked functions. Browse the mobile app in Genymotion and watch the logs update; they will also indicate whether SSL pinning has been bypassed.

Sending the HTTP/S Requests to Burp Suite

After bypassing SSL pinning, we can redirect the traffic to a proxy such as Burp Suite. Go to Generate Report and then to HTTP(S) Traffic to verify that requests and responses are being recorded. Once verified, go to Start HTTPTools to replay a request to the proxy.

From there, you can send the captured traffic to the Fuzzer by setting the IP address and port used by the proxy (usually localhost:8080). Just make sure the proxy has the same settings, toggle the Intercept button to “off,” and you’re good to go.

Conclusion

These key features of MobSF help security testers analyze the traffic of mobile applications. The tedious task of manually disassembling and reassembling the app is removed, and more time can be allocated to testing the logic and flow of the application.

Effectively Conducting Networking & Cybersecurity Distance Learning Courses

Photo by Julia M Cameron on Pexels.com

I had the privilege of sharing some of my experiences on how I conduct my networking and cybersecurity classes online to other IT Educators in a recent webinar hosted by the Philippine Commission on Higher Education (CHED).

Regardless of the Learning Management System (LMS) used by the school, technical subjects like networking and cybersecurity are different because students need a working laboratory to fully grasp the course. An LMS provides storage, collaboration, insights, engagement, and automation of quizzes, but creating a laboratory can be challenging.

How do I do it online and offline?

  • Search for useful and informative references – Aside from the references included in the syllabus, I also add helpful links, videos, and PDF files that I find relevant to the course. I tie them to each module so that students have support in their studies.
  • Customize slides for the class to meet learning objectives – I create my own slides for two reasons. First, there is so much information out there that it cannot fit in a single material; it is important to choose the most essential points and guide students in relating the material to the other references. For example, if the topic is network architecture, I discuss its components and provide references on example implementations/practices for each component. Students get a list of references they can check, but the important thing is that the core concepts are properly discussed. Second, publicly available slides from vendors tend to promote their products. I try to make my materials as vendor-neutral as possible and let the students choose what they prefer. In the end, we don’t want students exposed to only a single brand who then end up working at a company that uses another.
  • Record short lectures to set the context – I also record video lectures to explain technical concepts in a way students can understand more easily. Theories are very important because they apply to any industry or scenario you may encounter. Another objective is to add an industry touch to the discussion. As an industry practitioner, I know that some concepts discussed in books are idealistic or impractical in the real world. I try to balance both by providing industry insights and letting the students make their own analysis.
  • Create Virtual Machines (VMs) that students can use to run the tools and exercises, anytime and anywhere – An important aspect of technical classes is lab exercises. For networking and cybersecurity classes, I create my own VMs containing the tools, environment, and programs students need to practice what they learned in the lectures. I set up a web server in one VM and the attack tools in another. The good thing is that students can try various ways of accomplishing the exercises in their own time. And if something fails, they can just delete the current VM and load a fresh one in an instant.

Rubrics

I try to make the course very straightforward for students, especially the expected outputs and outcomes. Here are the grading components I use for the online classes:

  • Learning Log – The learning log is a sort of feedback mechanism from the students. It gives them a venue to say what is on their minds, since not everyone gets time to share in class. Usually, they share their thoughts about the lesson and other things they observe or experience. They also provide feedback on whether their groupmates are contributing or whether they are having problems with the lesson. The only added work for the instructor is finding time to read and respond to these learning logs.
  • Lab Exercise – The lab exercises validate whether students can use the tools in a practical sense, given a specific scenario. The good thing about a lab exercise in a VM with a plethora of tools is that there are many ways to achieve the objective. Everything depends on the student’s strategy.
  • Case Analysis – The usual problem for technical people is that they are tool-centric. They are well-versed in how to use the tools and their features; the usual problem is deciding when to use them. The case analysis portion helps students analyze various cases so that they think carefully about how to resolve a problem methodically.
  • Exam – Of course, the course will not be complete without an assessment. I usually create an objective multiple-choice exam to check if they know the theories and terms discussed. At the same time, they will also be asked situational questions to check how they will analyze and resolve issues. I try to simulate how IT certification exams work since they will be taking some in the future.

Sample Lesson

In a usual lesson, I start with a question to get their attention and interest. Afterward, there will be a discussion and/or debate. For example: What web application attack is the fastest to exploit and the most difficult to detect? The answers may vary, but what I want to discuss is session management. The question makes the students think and sparks sharing, discussion, and debate.

Then I move to the discussion proper. I explain the issues around session management and the best practices when developing a web application. Afterward, we do lab exercises and simulate how to check the strength of session IDs and how to exploit them if they are found to be weak.

Lastly, we analyze real-world cases of organizations whose applications had poor session management. We do a root cause analysis and provide recommendations on how to fix the issues.

This is just a sample order of instruction that I find helpful for students in their distance education.

Good Course References

  • Cybrary
  • Peerlyst
  • SANS Reading Room
  • Cisco Networking Academy
  • OWASP

Lesson 10: What are Security Services and Mechanisms?

 

Photo Credit: George Becker from https://www.pexels.com/

In the usual scenario, companies are more reactive than proactive with regard to security. Due to the perception that IT, which includes cybersecurity, is a cost center, procuring technologies may not be appealing to management unless a security incident occurs.  In Lesson 6, we discussed the value of the CISO to help align the company’s strategy and the necessary controls in place to ensure protection.

Coming from a technical security background, you would like to have the best tools and software available. But remember, management sees it as a cost with no visible return on investment, since it’s for internal use. The list of inconvenient truths below will help technical security personnel understand why, sometimes (or maybe most of the time), the tools we want are not approved.

The Cybersecurity Inconvenient Truths

  • You cannot protect everything from everyone.

If we list all the potential threats an organization can face, it will be a very long list: DDoS, malware, incompetence resulting in data loss, ransomware, corporate spies, and so on. Since the list of threats is long, there are also a lot of security controls we would have to put in place. Unfortunately, we do not have the means to prevent or mitigate all of these threats.

  • There are not enough resources and money in the world to totally mitigate all risks.

As a corollary to management’s perception of IT/cybersecurity, the team’s budget is limited. If resources are limited, we can only do what is possible within the budget. That leads to the next inconvenient truth.

  • Focus on protecting the most important information first, that which must be protected, and that with the highest risk.

Since we cannot protect everything and the budget is limited, the goal is to prioritize the threats with the highest risk and severity. That way, you cover the majority of the potential security incidents in the organization.

This activity of prioritizing the controls based on the risk-rating is called Risk Assessment. We will have another discussion about it in another lesson.

Security Services and Security Mechanisms

To properly align the organization’s strategy with the cybersecurity team’s goals, we have to define the security services and mechanisms. Security services describe how the organization’s security objectives are manifested. Security mechanisms, on the other hand, are the specific solutions we can implement to deliver those services.

See example below:

We conduct a risk assessment first before we can come up with the security services and mechanisms.

  • Goal: The organization wants to focus on physical security
  • Security Services: (1) Personnel security; (2) Access control
  • Security Mechanisms: (1) Security clearance, training, rules of behavior; (2) Biometrics, proximity cards, mantraps

What industry do you think will have this type of security goal?

It can probably be a bank or law enforcement (government) office.

It is important to determine the organization’s security services and mechanisms so that the cybersecurity team will also have a level of expectation on the types of controls and tasks that they will be doing.

So the next time you think about a cybersecurity project, revisit the team’s defined security services and mechanisms and see whether the project aligns with them. Otherwise, you may have to let it go so you don’t waste your time and effort.

 

 

 

Lesson 9: How a Court Decision Changed Privacy Laws in the World

Privacy as a concept is subjective because of factors such as culture and beliefs. For example, the Japanese can see each other naked in an onsen, which is culturally normal to them. Doing the same in the Philippines, however, is taboo; seeing other people naked can be considered a breach of their privacy.

On the other hand, part of Filipino culture is hospitality, which to some extent involves caring and oversharing. Some Filipinos tend to ask very personal questions even of someone they have just met. To the Japanese, this may be a breach of personal privacy.

A more universally acceptable definition of privacy might be, “Any information that an individual wants to protect from becoming public knowledge.” Do you agree?

There are different philosophical viewpoints of privacy described in Muzamil Riffat’s paper entitled, “Legal Aspects of Privacy and Security: A Case-Study of Apple versus FBI Arguments.” For this article, however, we will only be focusing on one viewpoint, which is the Privacy Right in the United States. 

Fourth Amendment in the US Constitution

The Fourth Amendment primarily focuses on the protection of the people against illegal searches and seizures by the government. The Fourth Amendment states: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”

The amendment is intended for government searches and seizures only. The important questions about the provision are: 1) What constitutes an unreasonable action? (People are protected from “unreasonable” searches.) 2) What is probable cause? (A warrant can only be granted if there is “probable cause.”)

Katz vs. United States (1967)

One of the landmark cases that helped define the scope of the Fourth Amendment is Katz v. United States (1967). To summarize the case, the FBI eavesdropped on Charles Katz’s phone conversations in a public telephone booth, suspecting that he was passing gambling information to clients in other states.

The question was whether the Fourth Amendment protected Katz against the FBI eavesdropping on his conversation in a public phone booth without a search warrant.

The Supreme Court voted 7-1 in favor of Katz. According to Justice Potter Stewart, “The Fourth Amendment protects people, not places.” The court ruling extended the Fourth Amendment protection beyond homes and properties.

In a concurring opinion, Justice John Marshall Harlan II interpreted the ruling through a two-part test: 1) that a person has exhibited an actual expectation of privacy; and 2) that the expectation is one that society is prepared to recognize as “reasonable.”

The “Katz Test” has been used in thousands of privacy-related cases, especially those involving communication, media, and the use of advanced devices.

Although it was a triumph for Charles Katz, the ruling also opened opportunities for criminals to carry out malicious activities and later claim Fourth Amendment protection.

Photo Credit: Stock Photo from Pexels.com

Further Reading:

  1. Legal Aspects of Privacy and Security: A Case-Study of Apple versus FBI Arguments, Muzamil Riffat, SANS
  2. Katz v. United States, 389 U.S. 347 (1967), Justia US Supreme Court
  3. Katz v. United States, Oyez
  4. Katz v. United States, Legal Information Institute, Cornell Law School
  5. Katz v. United States, Wikipedia

 

Lesson 8: What are the challenges in responding to cybercrimes?

Photo by Martin Lopez from Pexels

Cybercrimes are criminal activities punishable by law that are committed using a computer or the Internet. They range from identity theft, vandalism/defacement of websites, and scams to large-scale Distributed Denial of Service (DDoS) attacks.

Sample real-life cybercrimes are listed in the further reading below.

Types of Cybercrimes

As an investigator or responder, you first need to determine the type of cybercrime committed. This is important for determining the correct response (technical and legal) to the incident.

  1. Computer-assisted crime (source) – Computer is the enabler in the commission of the crime. (Ex. Stealing of credit card information through sniffing or phishing)
  2. Computer-targeted crime (destination) – Computer is the primary target of the crime. (Ex. Denial of Service attacks)
  3. Computer-incidental crime (indirect) – The involvement of the computer is secondary but important to the commission of the crime. (Ex. CHILD pornography is stored on a computer. Emphasis on CHILD since pornography in many places is legal but CHILD pornography is NOT)

Issues on Investigating and Resolving Cybercrimes

For developing countries like the Philippines, the government’s cybersecurity infrastructure for combating cybercrime is far from mature. But even the well-funded cybersecurity programs of advanced countries still face issues in investigating cybercrimes. The most significant ones are listed below:

  • Difficult to equate physical and logical assets.

The common misconception is that people don’t equate physical money with virtual money simply because the latter is not tangible. P1 million in a bag is perceived as really being one million, but transferring P1 million online is perceived as just sending bits and bytes through a web application. Due to this perception, cybercrimes are not treated as seriously as physical crimes.

  • The cyber-law environment has not been fully defined by the courts.

For developing countries like the Philippines, cyber-law is not yet fully defined by the courts because the basic principles of proving innocence or guilt are different in the cyber world. For instance, the common way to prove innocence is to present evidence and witnesses showing that you were not at the crime scene when the crime happened. However, you can be in the Philippines while launching an attack in China. Without proper IT knowledge, it is hard for lawyers and courts to probe further.

  • Cybercrime spreads globally.

Why do you think that, even though laws prohibit torrent (P2P) sites from sharing pirated films and software, there are still so many torrent sites online? Many countries have laws against piracy, but many others do not. Due to jurisdiction issues, our government cannot control everything on the Internet, especially content that is not hosted in our own country.

  • Cyber laws are highly technical

To explain Denial-of-Service (DoS) attacks, you need to explain the purpose of port numbers, the OSI model, TCP, and UDP, to name a few. The technical nature of cybercrime makes it even harder for courts to understand how an incident happened. What is crucial is not only technical knowledge but also the ability to explain it in layman’s terms, which is the usual problem in the IT industry (technical people who have a hard time explaining things to non-technical people).

These are some of the issues in investigating cybercrimes. The bottom line is that cybersecurity professionals need to be involved in the legal aspects of creating and implementing cybercrime laws. Lawyers may be good at putting into words how crimes work, but they need expert input to ensure that all aspects are covered. From another angle, the need for cybersecurity professionals’ involvement shows the demand for the profession in the industry.

Further reading:

  1. Cybercrimes up by 80% in 2018 (Philippine Star, March 2019)
  2. Online child abuse top cybercrime in Philippines (Philippine Star, April 2019)
  3. That Insane, $81M Bangladesh Bank Heist? Here’s What We Know (Wired, May 2016)
  4. Equifax Data Breach Settlement (Federal Trade Commission, January 2020)

 

 

 

Lesson 7: Why HR Policies complement Information Security

Most employees perceive both the HR and cybersecurity departments as existing to look for mistakes and punish you. Some say HR is the principal’s office, while the cybersecurity team is the surveillance arm. In truth, during investigations the cybersecurity team most often acts as the expert witness helping either to acquit or to sanction the employee. (This will be discussed in another lesson.)

There is some truth to that perception. In addition, many organizational security policies (discussed in Lesson 6) overlap with HR, meaning a policy was created either by HR alone or by HR and the cybersecurity team together. For example, the Acceptable Use Policy (AUP) outlines the expected behavior of an employee in the organization. Part of the AUP is the Internet Usage Policy (IUP) and other policies on the acceptable use of company-issued assets such as laptops and mobile phones. These policies are created and monitored by the cybersecurity team. The AUP is usually signed by the employee together with the job offer/contract prior to onboarding.

Even without explicitly intending to contribute to information security, HR has created an administrative, deterrent control (administrative because it is a policy; deterrent because it discourages people from violating the rules).

Other HR policies that help information security include background checks (an administrative, preventive control), which verify that you really are as good as your CV claims, and mandatory vacations (an administrative, detective control), which let the organization check and audit whether you have been doing something outside your job description while you are away from the office.

During operations, HR might have their independent and confidential tasks. But their roles are significant in providing a sound and mature information security environment in the organization.

Photo Credit: “21 Times Michael Scott’s Hatred For Toby Flenderson Was Out Of Control” https://www.buzzfeed.com/chelseabrown/jerkyjerkface

Lesson 6: Organizational Security

To someone from a technical team, organizational security might be seen as a domain that focuses only on paper-based policies (sometimes just copy-pasted templates), budgets, and risk assessment results. There is also a gap between highly technical members who have been doing hands-on security and management people who may have MBAs but whose background is not even in IT. However, it is important to emphasize that technical security needs organizational security to exist, and vice versa.

In big organizations, a separate C-level position is appointed for cybersecurity. The Chief Information Security Officer (CISO) is responsible for the overall cybersecurity operations of the organization and usually reports to the Chief Information Officer (CIO) or Chief Technology Officer (CTO).

What does it mean to have a CISO? 

The CISO has a seat and a say at the management level, be it the ManCom (Management Committee) or ExeCom (Executive Committee). He or she can provide insights and expert opinions on the organization’s cybersecurity posture vis-à-vis the business strategy. It is very important to have a person who can voice, and be heard on, what the technical members think is the best security for the organization. Without a CISO, it may be difficult for management to understand expensive spending on cybersecurity tools. You might be defending a multi-million-peso Unified Threat Management (UTM) firewall while management sees it only as an additional cost. Since they do not understand the value of the UTM firewall, they might instead budget for a home firewall. It’s also a firewall, but way cheaper!

Also, the CISO is ultimately accountable if any security incident happens in the organization.

The Governance Team

Aside from the technical team, the governance team also reports to the CISO, but it focuses on organizational security. It primarily drafts the policies that the organization must follow, including but not limited to:

  • Password Policy
  • Time of Day Restrictions
  • Classification of Information
  • Acceptable Use Policy (AUP)
  • Internet Use Policy (IUP)
  • E-mail Usage Policy (EUP)
  • Disposal and Destruction
  • Privacy Policy

They also align with the technical team on how to properly articulate security requirements in the policies. On the other end, they are responsible for making sure that stakeholders understand these policies well.

For more details about policies and policy templates, I highly recommend you visit SANS.org. They have a ton of templates and guides on how to create, modify and implement security policies. Here is the link: https://www.sans.org/security-resources/policies/

Photo Credit: Structuring the Chief Information Security Officer Organization by Allen, J. et al, Software Engineering Institute (SEI), Carnegie Mellon University

Further Reading:

Structuring the Chief Information Security Officer Organization by Allen, J. et al (2015) Retrieved from:
https://resources.sei.cmu.edu/asset_files/TechnicalNote/2015_004_001_446198.pdf

Lesson 5: Social Engineering

When I studied for and took EC-Council’s Certified Ethical Hacker (CEH) in 2013, I learned a very important lesson: even if you follow the hacking methodologies, they only have about a 10% success rate. The topic of this lesson, on the other hand, has a 90% success rate. In a nutshell: why spend a lot of time brute-forcing a password when you can just ask for it? That’s social engineering.

Social Engineering is an attempt to gain information from a victim or target through manipulation and deceit. The attacker attempts to gain the victim’s trust then exploits the emotions of the latter.

Note: There is a reading I wrote in 2011 that is relevant with this lesson. Copies will be/are given during class.

Why is Social Engineering very successful?

In past lessons, we studied Defense in Depth, which means there should be protection at every layer of security. In network security, for instance, you may deploy a firewall. The firewall has its limitations, but it strictly enforces whatever rules are written in its ACL. If a rule says allow web traffic, it allows web traffic; if it says deny FTP traffic, it denies FTP traffic.

Problems arise when humans intervene. Let’s say a school enforces a “No ID, No Entry” policy: all students are required to wear their ID upon entering the school. One day, a student forgets his ID, but the guard still lets him in because they are friends. Is it correct for the guard to make exceptions even when there is an explicit ID policy? What if the student brings his friends along? Will the guard still allow it because they are friends?

Humans, or “wetware,” are the weakest link in the security chain because they simply make a lot of exceptions. That’s why human vulnerability is a weakness that no patch can perfectly fix.

Ethics: Social Engineering in Penetration Testing

In penetration testing, a third party service provider actively tests the security solutions implemented in the network. Active testing means exploiting discovered weaknesses in security. One of the tests is the social engineering test. In this case, the pen tester tries to bypass security through social engineering.

For example, the company security policy requires a badge/ID to enter the office. The pen tester carries a lot of heavy things so that the guard helps him instead of asking for the ID. The pen tester successfully enters the facility, with the guard as an unwitting accessory. After the test, the guard is terminated for abandoning his duty during the test.

It is the job of the pen tester to lure people into breaking the policy. The targets, out of goodwill, will help them. But in the end, they will be terminated. Is that ethical?

Steps in Social Engineering

There are three steps in social engineering.

  1. Information Gathering

In this step, the social engineer gathers as much information about his target as possible. He can search social networking sites, stalk the target to learn his routines, and talk to his friends to learn more about his interests.

  2. Developing Relationships

After you have gathered enough information about your target, it’s time to build a relationship. Let’s say you learned that the target likes Justin Bieber. You can create a “perfect encounter” within his daily routine; you could sit beside him on a bus and have a little chitchat about Justin Bieber. Ideally, you build the relationship out of that “serendipitous meeting.” In some cases, you will need to “invest” in something: if you learned that the target is deep in debt, aside from being a Justin Bieber fan, you can use that to your advantage in the next step.

  3. Exploitation

In the last step, you push through with your goal of eliciting the information you need from the target. You may have lent your target a sum of money so he could pay his debt. Now you can use that to your advantage: ask for the information and remind him that he owes you, so he should return the favor. At that point, the mission is successful.

Types of Social Engineering Attacks

The Social Engineering Attacks can be classified into 2 categories:

  1. Non-technical – Doing social engineering in a traditional way
    1. Dumpster diving – Literally checking the target’s garbage.
    2. Shoulder surfing – Glancing at another person’s computer, cellphone, or papers.
    3. Impersonation – Pretending to be key personnel in your target’s company.
    4. Tailgating – Following closely behind a person who taps his badge to open the access door, entering without presenting your own.
  2. Technical – Doing social engineering using technology
    1. Phishing – Getting target’s information using fake e-mail or website.
    2. Spear phishing – A type of phishing targeting a particular person.
    3. Pharming – Redirecting users from a legitimate website to a fake one (for example, through DNS poisoning) to harvest their information.
    4. Vishing – Deceiving target using telephone/cellphone/smart phone.

—– NOTHING FOLLOWS —–

You can download the PDF version of this lesson here: INFOSEC_L5_SE

Lesson 4: Types of Authentication and Access Control

Authentication

Authentication is defined as proving that you are who you claim to be. By default, we have three types of authentication:

  1. Something that you know – A form of authentication coming from what you know (residing in the mind)

Ex. Password, pin

  2. Something that you have – A form of authentication that is tangible.

Ex. Token, cellphone, ID

  3. Something that you are – A form of authentication where a unique part of your body is used.

Ex. Fingerprint, voice recognition, iris scan

No single authentication type can be considered the strongest. Something-you-know authentication, such as a password, can be cracked through brute force or social engineering. Something-you-have authentication, such as an ID, can be stolen or reproduced. Something-you-are authentication, such as a fingerprint, is prone to false rejections and false acceptances (sweaty hands, for example).

To make authentication stronger, it is advisable to use two or more types together to provide layered security. This is what we call two-factor or multi-factor authentication. Examples include:

  1. ATM + Pin (something that you have and you know)
  2. Credit card + signature (something that you have and you know)
  3. Cellphone for One-Time Password (OTP) + password (something that you have and you know)
  4. Badge + biometric (something that you have and you are)

Note: Usernames and passwords are not considered multi-factor because both are something that you know type of authentication.

Questions to search on:

  1. What is the fourth type (or other types) of authentication?
  2. What is the most accurate biometric? Why?

Types of Access Control

Access Control or Authorization determines the type of privilege a user has after being authenticated. If you enter the school, an authentication mechanism could be your school ID. Access Control determines which rooms in the school you can access. If you’re a student, you can access the classrooms, computer laboratories and cafeteria. However, you are prohibited from accessing the faculty room and server room. A faculty member can access more rooms compared to a student.

Mandatory Access Control (MAC)

MAC is the strictest type of access control. It is typically seen in government, especially the military. It uses sensitivity labels (SL) for both the subject (which initiates an action) and the object (which receives the action). It is also known as a multi-level type of access control.

SL can be classified as:

Top Secret

Secret

Confidential

Public

Let’s say a File A (Object) has an SL of Secret. Only the subject that has an SL of either Top Secret or Secret can access the file.

To visualize, let’s say a five-star general has an SL of Top Secret, a colonel Secret, a lieutenant Confidential, and a sergeant Public. Only the colonel or the general can access File A because their SL gives them clearance to do so. A subject can access all objects at or below his/her SL; MAC uses a top-down approach.

Discretionary Access Control (DAC)

DAC is the direct opposite of MAC. This type of access control is seen in non-military institutions (usually commercial use). In DAC, the owner of the file determines the privileges of the subjects over the objects. It is also known as a single-level type of access control.

DAC uses an Access Control Matrix (r = read, w = write, x = execute), with subjects listed down the side and objects across the top:

Objects: Chicken File (Object 1, owner: Riza) | Pasta File (Object 2, owner: Reese) | Beef File (Object 3, owner: Rex)

James (Subject 1): rwx, -wx
Ray (Subject 2): rw-, rw-, -wx
Ogawa (Subject 3): rwx, -wx

In the above scenario, we have three users (subjects) accessing three files (objects). Each file is owned by a specific individual (the owner). It is at the owner’s discretion what privileges to give each subject, and those privileges may also change.

Role-based Access Control (RBAC)

RBAC is also known as non-discretionary access control. It grants privileges based on roles/tasks, which is beneficial for large organizations in organizing group privileges to objects. For example, all students have read-only access to File 1, File 2, and File 3, while all faculty members have full access to the same files. The admin simply adds users (subjects) to the created groups for consistency and convenience.

Rule-based Access Control

Rule-based access control grants privileges based on a list of enforced rules. A good example is an Access Control List (ACL) on a firewall. The firewall grants or denies access based on the rules found in the ACL; if no rule matches, no privilege is given (implicit deny).
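For illustration, a Cisco-style ACL that permits only web traffic to a hypothetical server (the addresses are placeholders) might look like the following; any packet that matches no rule is dropped by the implicit deny:

access-list 100 remark allow web traffic to the web server only
access-list 100 permit tcp any host 192.168.1.10 eq 80
access-list 100 permit tcp any host 192.168.1.10 eq 443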

—– NOTHING FOLLOWS —–

You can download the PDF version of this lesson here: INFOSEC_L4_AuthAC

Lesson 3: Defense in Depth and related concepts

Defense-in-Depth

We have agreed that in infosec we protect data/information. As discussed in Lesson 1, the scope of infosec is very broad, and IT security is just one part of it. We also learned in Lesson 2 that preventive controls are incomplete without detective controls and response. From these earlier concepts, a more concrete and concise security architecture is formed: Defense in Depth.

The concept of Defense in Depth states that before anybody can access the data, they must pass through layers of security first. The security controls may vary, but they should be arranged in layers.

For example, if you want to access the bank database, you need to pass through frisking of security guards, inspection of bags and proper identification when entering the bank premises. That is what we call Physical Security.

When you enter the premises, you are required to wear your ID at all times. If you are a visitor, a member of security personnel is required to accompany you wherever you go. That is the next layer, called operational security.

If you connect to their wireless network and your laptop cannot access the Internet because of MAC filtering, that is an example of Network Security.

When desktop computers have their USB ports disabled to prevent the spread or download of viruses, that is an example of host security.

When you need to enter a username and password to gain access to your account, that is an example of Application Security.

Diversity of Defense

The diversity of defense concept is quite tricky. Management will always want a cost-effective IT infrastructure. For example, Huawei, a known networking vendor, might offer an IT infrastructure package that is very appealing. Let’s say they offer the whole infrastructure for X pesos. Management may be lured into buying the package because of the cost. However, as an information security professional, you should weigh the possible security issues that may arise.

In diversity of defense, you are compelled to buy different brands of network and IT devices such as firewalls, switches, and routers. But if you buy from different vendors, the cost may double (2X pesos) compared to the X pesos for a single brand.

So what is the advantage of this concept?

If a vulnerability is found in the Huawei firewall, then no matter how many Huawei firewalls you have, your entire network is vulnerable to that particular attack. Simply put, the cost of information disclosure when a single vendor-specific vulnerability is exploited is far greater than the cost of implementing diversity of defense.

Security through Obscurity

If we say that a company is implementing security through obscurity, can we consider it secure? In security through obscurity, we rely on the idea that nobody will think a valuable asset is hidden in an obscure place.

For example, would anybody think that there’s P1 million stored underneath the driver’s seat of my car? What are the odds, right? But if I accidentally leave my car unlocked and somebody randomly opens the door, is my asset still secure?

Security through obscurity is simply hiding something. But hiding something without proper safeguards has no security at all.

Cost-Benefit Analysis (CBA)

In information security terms, CBA refers to weighing the cost of a safeguard against the value of the asset. As a rule of thumb, you should not buy a safeguard that is more expensive than the asset it protects.

For example, you would not buy a vault worth 20,000 pesos to safeguard a Timex watch from a buy-one-take-one sale worth 2,000 pesos. A thief would probably steal the safeguard instead of the asset inside it.

—– NOTHING FOLLOWS —–

You can download the PDF version of this lesson here: INFOSEC_L3_GenSec