Layakk https://www.layakk.com/ PRACTICAL EXAMPLES WITH FRIDA: Frida vs Anti-Debug Techniques on Windows (I) https://www.layakk.com/en/practical-examples-with-fridafrida-vs-anti-debug-techniques-on-windows-i/ Thu, 04 Feb 2021 11:42:08 +0000 In this series of entries we are going to show practical examples of how to use Frida to bypass anti-debug techniques that some applications implement. The series begins with a short description of what Frida is, presenting the environment we will use for the examples we show later; it continues with a description of anti-debug techniques, first in general terms and then detailing a couple of them; and it will end with a detailed exposition of some techniques and how we can bypass them using Frida.
Without further ado, let’s start.

[Image: Frida logo]

What is Frida?

Using the same definition as the official webpage (https://frida.re/), Frida is “a dynamic code instrumentation toolkit”. In other words, it is a set of tools that allow for the instrumentation of code, giving us APIs that enable the interception, analysis and modification of parts of the code of an application for Windows, macOS, GNU/Linux, iOS, Android and QNX during its execution. In essence, Frida allows the manipulation, at run time, of what is about to be executed, just before it is executed.

This is easier to understand with a simple example. We start with a simple C++ program that uses a function to add two values and returns the result. The function that we want to manipulate is declared as Add(int,int). First, we will change one of the int parameters, and, secondly, the returned result. Once we have compiled the code, we need to locate the offset of the Add function by analyzing the code of our executable. In this case, we use IDA Pro to analyze the code, but any other method or application that allows us to analyze the executable’s code could be used. We identify the function at address 0x00401000, and we know that the executable’s base address is 0x00400000; therefore, the offset of the function is 0x00001000.

Then, we can intercept the function call and modify its behavior using Frida. We will develop a little JavaScript script to do it. First, we need to identify where the program is loaded on execution (its base address), and then we can locate the address of the Add function by adding the offset obtained before to this base address (base + 0x00001000). Now, we can use Interceptor with the function address to add some code before and/or after the execution of the function. We do all of this using the Frida JavaScript API (https://frida.re/docs/javascript-api/).

Thus, when we execute the application with ‘1’ and ‘2’ as arguments, we expect ‘1 + 2 = 3’ as the result. However, if we uncomment the first commented line ( args[0] = ptr(‘100’); ) we replace the value of variable op1 with 100 before the function executes, getting a result of ‘1 + 2 = 102’. On the other hand, if we uncomment the second commented line ( retval.replace(‘3210’) ), we replace the return value after function Add is executed but before the result is returned, getting ‘1 + 2 = 3210’.
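Putting the steps above together, the Frida script could look roughly like the following sketch. It runs inside the Frida runtime (not standalone Node.js), and the module name "test.exe" is an assumption of ours for the example binary; the 0x1000 offset is the one obtained above with IDA.

```javascript
// Frida agent script (sketch): intercept the Add function at base + 0x1000.
// "test.exe" is a placeholder; use the actual module name of your binary.
const base = Module.findBaseAddress("test.exe");
const addAddr = base.add(0x1000);   // base address + offset found with IDA

Interceptor.attach(addAddr, {
    onEnter: function (args) {
        console.log("Add(" + args[0].toInt32() + ", " + args[1].toInt32() + ")");
        // args[0] = ptr('100');    // uncomment to force the first operand to 100
    },
    onLeave: function (retval) {
        console.log("Add returned " + retval.toInt32());
        // retval.replace('3210');  // uncomment to overwrite the return value
    }
});
```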

[Image: Frida execution output]

Anti-Debug Techniques Introduction

Once we have seen what Frida is in general terms, we are going to take a look at what anti-debug techniques are and how we will classify them in successive posts of this series. Anti-debug techniques are mechanisms that software can implement to try to detect whether it is being run under debugger supervision. Debuggers allow us to analyze code dynamically, set breakpoints, modify and analyze memory sections, etc. Using these techniques, an application can avoid this inspection, making it more difficult for a reverse engineer to understand it. In forthcoming publications of this series we will discuss anti-debug techniques thoroughly and divide them into different groups, depending on the detection mechanism they use.

Techniques based on system calls

We consider in this group the techniques that use functions of the Windows API to obtain information about the presence of a debugger. There are a lot of functions that can be used for this purpose: from functions like IsDebuggerPresent, which returns a boolean value indicating whether the application is being debugged (True) or not (False), to functions like FindWindow, which tells us whether a window with the name of a well-known debugger (IDA, OllyDbg, Immunity Debugger, etc.) is present.
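As an illustration, the first of these checks can even be exercised from Python through ctypes. This is just a sketch of ours: the non-Windows guard that returns False is our own addition, since the API only exists on Windows.

```python
import ctypes
import sys

def is_debugger_present():
    """Call kernel32!IsDebuggerPresent via ctypes (Windows only)."""
    if sys.platform != "win32":
        return False  # API not available outside Windows; assume not debugged
    return bool(ctypes.windll.kernel32.IsDebuggerPresent())

print(is_debugger_present())
```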

Techniques based on memory checks

These are methods where the application does explicit verification of certain flags in memory that reveal whether the process is being debugged. Some flags that can be used for this purpose are the IsDebugged flag, the Heap flags or NtGlobalFlag. These flags are members of structures that Windows maintains for each process with information about them. In future posts we will go into detail on these structures.

Techniques based on time

This group includes the methods that use calculations related to time to determine whether a process is being debugged. When a process is being debugged, it takes more time to execute the same set of instructions than when it is not, and this time difference is usually significant. For this reason, the application can check the time at the beginning and at the end of the execution of a set of instructions, and, if it takes more time than an established threshold, it can determine, with high probability, that the process is being debugged.
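The idea can be sketched in a platform-independent way. The workload and the 0.5-second threshold below are arbitrary illustrative choices of ours; a real Windows implementation would more likely compare rdtsc or GetTickCount readings around the guarded code.

```python
import time

def looks_debugged(threshold_s=0.5):
    """Time a fixed block of instructions; if it runs far slower than the
    calibrated threshold, someone is probably single-stepping through it."""
    start = time.perf_counter()
    total = 0
    for i in range(100_000):      # the instruction block under measurement
        total += i
    elapsed = time.perf_counter() - start
    return elapsed > threshold_s

print(looks_debugged())           # normally False when run natively
```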

Techniques based on exceptions

Finally, we group here the set of methods based on triggering exceptions to identify if the process is being debugged or not. There are differences in how a system handles exceptions when a process is being debugged and when it is not. A program can take advantage of this fact to determine whether a debugger is attached or not.

Preparing our testing environment

To implement our setup, we will use a Windows 10 virtual machine, where we will initially install Python 3.8.6rc1.

Then we install Frida, which can be downloaded directly from GitHub (https://github.com/frida/frida) or installed using the Python pip tool. We use pip because it’s easier than other methods.
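For reference, the pip route is a one-liner; the frida-tools package additionally provides the command-line utilities (frida, frida-ps, frida-trace, etc.).

```shell
# Install the Frida Python bindings plus the CLI utilities
pip install frida frida-tools
```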

[Image: installed Frida version]

We also install Visual Studio Community 2019 to develop the example programs in which we implement some anti-debug techniques to show how they work. These programs will be used to test different methods to bypass those anti-debug techniques. There are different ways to use Frida: we can directly use the executables included in the package (each executable has a specific functionality) or use the Python module, also included in the package, to develop our own interface.
We have chosen the second option, developing a little interface that allows us to spawn new processes or attach to existing ones, injecting one or more scripts that provide some functionality. We chose this approach because we want to be able to customize the interface depending on the specific requirements that we will encounter in the examples detailed in later posts.
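The core of such an interface might look like the following sketch. It requires the frida Python module and a real target binary, so treat the function and file names here as illustrative, not as our final implementation.

```python
import frida  # Frida Python bindings (pip install frida)

def spawn_and_instrument(exe_path, script_source):
    """Spawn a target suspended, inject an agent script, then let it run."""
    pid = frida.spawn(exe_path)                   # process starts suspended
    session = frida.attach(pid)
    script = session.create_script(script_source)
    script.on("message", lambda msg, data: print(msg))
    script.load()
    frida.resume(pid)                             # resume with hooks in place
    return session

# Illustrative usage:
# spawn_and_instrument("test.exe", open("hook.js").read())
```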

EUCC in 5 minutes https://www.layakk.com/en/eucc-in-5-minutes/ Wed, 23 Dec 2020 13:43:35 +0000 EUCC (from “European Union” and “Common Criteria”) is the first European security certification scheme for ICT products being defined under the umbrella of the CyberSecurity Act (CSA), as we mentioned in our previous article dedicated to the CSA.

Maintaining our goal to make things as simple as possible for our clients in their efforts to certify their products, we provide here a brief and concise description of the main characteristics of this new certification scheme.

Currently (December 2020) the first draft of the EUCC is available and it is expected that the final version be approved during 2021.

EUCC is based on Common Criteria (ISO/IEC 15408 & ISO/IEC 18045) and it is destined to replace the current national certification schemes also based on Common Criteria (ENECSTI in Spain), which are currently operating under the mutual recognition agreement SOG-IS MRA.

Its scope will be the security certification of ICT products that do not belong to any other specific scheme (in the near future it is expected to have specific schemes for particular technologies or markets, like IoT, cloud services, or mobile communications).

NOTE: Since this is still a draft version, the characteristics described below might be modified prior to its final approval, so for the time being they can not be taken as completely certain.

Assurance Levels

EUCC offers the two highest assurance levels defined in the CSA: “substantial” and “high”. Level “basic” has been left out of the scope of EUCC, to be covered by other future certification schemes with lesser security requirements. The assurance level is assigned based on the assurance level selected in the AVA (Vulnerability Assessment) class of Common Criteria: AVA_VAN.1 and AVA_VAN.2 are considered level “substantial”, while AVA_VAN.3 to AVA_VAN.5 are considered level “high”.

Certification Bodies

Certificates will be issued by certification bodies that will need to be accredited (ISO/IEC 17065), but that might be different from the national cybersecurity certification authority of each country. Nevertheless, certificates of assurance level “high” will have to be issued by the corresponding national cybersecurity certification authority, or by certification bodies authorized by it.

Laboratories (IT Security Evaluation Facilities, ITSEF)

The evaluation of the security of the products will be conducted by accredited (ISO/IEC 17025) laboratories, which may be internal or external to the corresponding certification body. This particular aspect is not different from the current certification schemes.

Maintenance

During the lifetime of their certificates, products will be subject to a maintenance process in response to changes that might affect their certification status. Maintenance activities will include revision and decision making by the certification body and, when necessary, evaluation by the laboratory.

Vulnerability Management

EUCC mandates that all vulnerabilities that might appear during the lifetime of the certificate be managed according to an adapted version of the following standards:

ISO/IEC 30111 : Information technology — Security techniques — Vulnerability handling processes
ISO/IEC 29147 : Information technology — Security techniques — Vulnerability disclosure

Patch Management

EUCC includes the possibility for the vendor to include a patch management mechanism to be analyzed during the certification of the product. In this manner, the vendor will later be able to follow that mechanism to keep the product patched against potential new vulnerabilities that might be detected, while maintaining its certification status.

Transition Period

EUCC recommends a transition period of 2 years between the date when EUCC becomes active and the date when the current schemes based on the SOG-IS agreement become inactive, thus ensuring no interruption of service. During this transition period, vendors will need to familiarize themselves with, and adopt, the new requirements imposed by EUCC (compulsory maintenance, vulnerability management and patch management). Laboratories and certification bodies will also need to use that transition period to adapt their operation to the new scheme.

As said before, this is still a draft version, so some of the described characteristics could still undergo modifications before its final approval, but the draft is considered to be quite mature and no big changes are expected. Therefore, we recommend that vendors start familiarizing themselves with the new requirements that EUCC will impose for the certification of their products.

 

CyberSecurity Act in 5 minutes https://www.layakk.com/en/cybersecurity-act-in-5-minutes/ Mon, 16 Nov 2020 11:39:48 +0000

Understand the new European regulation on security certification

Our Product Security Evaluation Laboratory (accredited in both Common Criteria and LINCE methodologies) always tries to take care of the complexity of the certification processes, so that it becomes a much simpler task for our customers. With that approach, we will explain in this post what the  European CyberSecurity Act (CSA) is and what implications it has on the Security Certification schemes.

Why is it necessary to certify product security?

There is a need to regulate, in a formal way, product evaluation processes, so that a product’s certification means something measurable and reproducible with respect to the security capabilities of the product holding the certificate. This regulatory framework is called a certification scheme.

Previous situation

Prior to the CSA, countries had (and, so far, still have) their own local certification schemes (in Spain, ENECSTI, driven by the CCN’s Certification Authority). Within this framework, the laboratories conducted the product evaluations and the CCN’s Certification Authority was the only organization with the authority to issue a certificate. The need to recognize the validity of a certificate across countries was satisfied by the establishment of different agreements, SOG-IS and CCRA being the most relevant.

The CyberSecurity Act (CSA)

The CSA is a legal framework that regulates and unifies all security certification processes for all european countries.

Its main key points are:

  • ENISA (the European Union Agency for Cybersecurity) has been appointed as the organization in charge of developing and deploying this regulation, as well as of writing the security certification schemes in accordance with what is established in the CSA.
  • Certification schemes: as it is not possible to define cybersecurity needs, requirements and objectives globally (i.e., for all products and services), specific schemes have to be defined for each group of products or processes that share the same peculiarities regarding security, always in compliance with the general framework defined in the CSA:

Up to now, the candidate schemes, ordered by deployment maturity, are:

    • EUCC (Common Criteria based European cybersecurity certification scheme): it is the first scheme defined, and it will be the successor of the local Common Criteria schemes currently operating under the SOG-IS agreement. The definition of this scheme is the most mature, so it will be worth dedicating a full post to it in our blog in the near future.
    • Cloud Services: this scheme is still being defined as of the date of writing; it will regulate the certification of services provided in the cloud.
    • Other schemes: other schemes are under construction and will be deployed in the short/medium term: ICSS, 5G, etc.
  • New stakeholders: each scheme may determine that some certificates, depending on the assurance level, may be issued by private entities (typically evaluation facilities that have extended their accreditation accordingly). It is even possible that some schemes will allow self-evaluation of security features, performed by the vendor itself.

Transition

It is expected that the first scheme to come into force will be EUCC, during the first half of 2021. Most probably, local schemes will maintain their presence during a co-existence period. The new role of the current certification authorities is still being defined; presumably their responsibilities will evolve towards the coordination, regulation and control of the organizations allowed to issue certifications, as well as the certification of products with higher assurance levels.

Regarding vendors, the transition will be very smooth: it is foreseeable that certificates issued prior to the CSA’s activation will retain their validity. Whether you need an immediate certification for your product or you are planning to certify it next year, Layakk is prepared to offer you evaluation services and, depending on the assurance level, also certification services. Our laboratory services always comply with the principles of simplicity, high quality, honesty and best price.

New image, same personality https://www.layakk.com/en/new-image-same-personality/ Fri, 16 Oct 2020 10:56:19 +0000 During the last two years Layakk has experienced a transformation, from being a company purely centered on technical cybersecurity services to also becoming a Laboratory officially accredited to perform security evaluations of IT products, conforming to the Common Criteria (CC) and LINCE methodologies.

Some may be surprised to learn that we have embraced formal methodologies for security evaluations of IT products, and they may think that we have crossed over to the dark side and abandoned hacking, but nothing could be further from the truth: we are not abandoning red team activities, nor research, nor the evaluation of the security of products using our own methodology; instead, we are extending their scope and applying them also to formal security evaluations. Also, in our opinion, formal methodologies make a lot of sense when the goal is to officially certify the security of a product, because they provide measurability, comparability and reproducibility of the actions performed on it during an evaluation.

This transformation has been a challenge, but we are very happy to have tackled it, and very satisfied with the results so far. Also, we are totally ready and excited to implement the transformations that the CyberSecurity Act will impose in the imminent future in the security certification of IT products arena all across Europe.

Because of this evolution, we decided it was time to also evolve our corporate image, although without any intention of breaking with our past, of which we are very proud.

The redesign was carried out by Socarrat and was based on maintaining the essence of the company while reflecting its new capacities in a much more current format.

The new logo incorporates a new sail, becomes bicolor, updates its typography, and adds a subtitle that reflects the area of expertise of the company:

We hope you, our clients, who are the reason for our existence, like it as much as we do.

Best regards from the whole Layakk team.

Shell script “libraries” https://www.layakk.com/en/shell-script-libraries-en/ Fri, 19 Dec 2014 10:30:05 +0000

If you are a security professional (or an IT professional), you -like us- are probably constantly writing shell scripts, so that you can automate certain tasks in your Linux (or Unix) environment.

We don’t usually use shell scripting to write complex applications (although some shell scripts become quite big), but we do use it extensively to create some “utilities” or little tools to quickly fulfill certain needs that arise along the way.

This happens to us all the time when doing pentesting. Very often, we have to write a shell script very quickly just to solve a particular problem, so we write it as fast as possible, without regard to any software design aspect. When you do this, you know that it is not the right way to write programs, but you accept it because you think the extra work that doing it well would entail is not worth it, and you prefer a quick working result over well designed code.

An obvious consequence is that you end up writing the same piece of code again and again. One of the most infamous examples that applies to our case is the argument parsing function: we cannot count the number of times we have written a function to handle script options and arguments and display usage help in a way that is reasonably comfortable for us.

During the last few months, we have been working on a job that has required us to write (and use) many shell scripts, and this time, since we suspected in advance that this would be the case, we decided to take a -let’s say- cleaner approach: we decided to write what we call “shell script libraries”, which turned out to be a big help with the aforementioned situation.

These “shell script libraries” are sets of shell functions that you can import and use from within your shell scripting code, and some of the functions can be useful even if invoked directly from the shell command line.

In this article we present the following shell libraries:

  • lk_option_parser.sh
  • lk_math.sh
  • lk_net.sh

lk_option_parser.sh

We started out by writing an option parser library. If your shell script needs to be able to behave in different ways depending on its invocation, or if you need to pass information to it, you usually achieve this through the use of options and/or arguments. We liked the way this is handled in libraries available in languages like C or Python, so we tried to write something similar. The library that we have written is intended to be generic and easy to use.

Note: Perhaps there is something similar out there, but none of the code we found and tested matched exactly what we were looking for.

To use the library, you have to download it and put it in a directory that is in your PATH environment variable (or in the same directory as the invoking shell script).

Then, source it from within your code, for example as follows:

. lk_option_parser.sh || exit 1

Then, call add_program_option once for each option you need to handle, like this:

Note: In this context we use option and argument as synonyms; see considerations below

add_program_option “-h” “--help” “Shows this help.” “NO” “NO”

where:

  • “-h” is the short flag of the option
  • “--help” is the long flag of the option
  • “Shows this help.” is the explanation that will appear when the usage is shown
  • the first “NO” means that this is not a mandatory option
  • the second “NO” means that this option doesn’t have an associated value

After you have all your options added, you just call:

parse_program_options $@

And then you may call:

show_program_usage “-h” && exit 0

This will test whether “-h” (or “--help”) is present and, in that case, show the program usage, after which the script exits. You can also call show_program_usage with no arguments, in which case no test will be performed.

If later in your code you want to know whether an option is present, you can do it like this:

if is_option_present “-h”
then
 …
fi

And if you want to get the value for a specific option, you can do it in this way:

_myvar=`get_option_value “-h”`

_myvar will take the value associated to the option. A value is everything between the option and the next short or long option, or the end of the command line. Obviously in this example _myvar will simply be assigned an empty string.
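Putting the pieces together, a complete script using the library could look like the following sketch; the “-i/--interface” option is just an illustrative addition of ours, and the script assumes lk_option_parser.sh is in your PATH.

```shell
#!/bin/bash
# example.sh - minimal script built on top of lk_option_parser.sh

. lk_option_parser.sh || exit 1

add_program_option "-h" "--help"      "Shows this help."       "NO"  "NO"
add_program_option "-i" "--interface" "Interface to work on."  "YES" "YES"

parse_program_options $@

show_program_usage "-h" && exit 0

_iface=`get_option_value "-i"`
echo "Working on interface ${_iface}"
```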

That’s _almost_ everything you need to know to use the library! The code comments provide a deeper explanation of the functions, although you probably won’t need it.

Let us add just a couple of considerations we think you should be aware of if you are considering using the library:

  • The library is written for bash, because that is the shell interpreter that we use, and we haven’t tested it on other interpreters. Perhaps it could be re-written in a more universal way, but we have no plans to move in that direction because, at least for now, bash is enough for us.
  • We know there is much discussion about the right terminology regarding arguments, options and parameters. Please note that, arbitrarily, we decided to use the terms “option”, “argument” and “parameter” as synonyms in the context of our shell scripting libraries. Also arbitrarily, we decided that all options would always include an explicit switch (e.g. “-h”, “--help”), some of them with an associated value (e.g. “-i INTERFACE”) and some without (e.g. “-h” for help or “-v” for verbose). Finally, we also decided that each option would be either mandatory (its presence is required) or optional. Please note that, therefore, in this context, “option” does not mean “optional” 🙂

The lk_option_parser.sh library worked so well for us that we decided to take the same approach to tackle other problems, and so we started two more libraries that are described in the following sections. They are far from being complete, but our idea is to continue expanding them, and any new libraries we may find interesting to create, with ever growing functionality.


lk_math.sh

lk_math.sh is a library that will contain mathematical utilities. At the present moment, it just includes the following functions:

  • get_random_uint
  • get_random_hex_digits
  • hex2dec

The following is an example of use:

jl:~ root # . lk_math.sh
jl:~ root # get_random_uint 0 -1
jl:~ root # get_random_uint 0 10
2
jl:~ root # get_random_uint 0 10
8
jl:~ root # get_random_uint 200 100000
94970
jl:~ root # get_random_uint 200 100000
46624
jl:~ root # get_random_uint 200 1000000
394239
jl:~ root # get_random_uint 200 1000000
525972
jl:~ root #
jl:~ root # get_random_hex_digits
4
jl:~ root # get_random_hex_digits 20
2BAB96D82D9D7BBE0429
jl:~ root # get_random_hex_digits 20
2E7F41F8F6EB098A078E
jl:~ root #
jl:~ root # hex2dec x
0
jl:~ root # hex2dec
jl:~ root # hex2dec FA
250
jl:~ root # hex2dec 10
16
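In case you are curious, helpers with this behavior fit in a few lines of bash. The following is just our own sketch modeled on the session above, not necessarily how the published library implements them:

```shell
# Sketches of two of the helpers (behavior modeled on the session above)
hex2dec() {
    [ -z "$1" ] && return 0          # no argument: print nothing
    case "$1" in
        *[!0-9A-Fa-f]*) echo 0 ;;    # invalid hex input: fall back to 0
        *) echo $(( 16#$1 )) ;;
    esac
}

get_random_hex_digits() {
    local _n=${1:-1}                 # default: a single hex digit
    tr -dc '0-9A-F' < /dev/urandom | head -c "$_n"
    echo
}
```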


lk_net.sh

lk_net.sh is a library that will contain networking related utilities. At this moment it just includes the following functions:

  • is_mac_address
  • generate_rand_mac

Here are some usage examples:

jl:~ root # . lk_net.sh
jl:~ root #
jl:~ root # is_mac_address “This is not a MAC”; echo $?
1
jl:~ root # is_mac_address “XX:XX:XX:XX:XX:XX”; echo $?
1
jl:~ root # is_mac_address “0A:1B:2C:3D:4E:5X”; echo $?
1
jl:~ root # is_mac_address “0A:1B:2C:3D:4E:5F”; echo $?
0
jl:~ root #
jl:~ root # generate_rand_mac
BE:9B:FD
jl:~ root # generate_rand_mac FULL
60:AD:CA:70:C5:D4
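Again as a rough sketch of ours (the published library may do it differently), both helpers can be written in a handful of lines:

```shell
# is_mac_address: succeeds (exit 0) only for a full XX:XX:XX:XX:XX:XX address
is_mac_address() {
    printf '%s\n' "$1" | grep -Eq '^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$'
}

# generate_rand_mac: 3 random octets by default, all 6 when called with FULL
generate_rand_mac() {
    local _octets=3 _mac="" _i
    [ "$1" = "FULL" ] && _octets=6
    for (( _i = 0; _i < _octets; _i++ )); do
        _mac="${_mac}$(printf '%02X' $(( RANDOM % 256 ))):"
    done
    echo "${_mac%:}"
}
```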


Conclusion and future work

We found these small shell libraries to be really useful for us, and so we thought we would share them. We hope you find them useful too. You are free to use them in almost any way you see fit, since we are publishing them under the GPLv3 license.

Obviously, the code can be improved and expanded, and while we will certainly do so, we would also be more than happy to get your comments and contributions, which we would study and eventually include in the code, giving you the appropriate credit, of course.

Book: Mobile Communications Hacking and Security – SECOND EDITION https://www.layakk.com/en/book-mobile-communications-hacking-and-security-second-edition-en/ Thu, 29 May 2014 18:25:02 +0000 Back in November 2011 we published our first book about mobile communications security… After more than a thousand units sold, we are proud to announce that the second edition of the book is available.

During these two and a half years, like other researchers, we have maintained our activity in this field. The aim of this second edition of the book is to collect and synthesise most of this information. So, what has changed during this period and has been added to this second edition?

In the 2G field, new inexpensive technologies have arisen, allowing anyone to perform most of the practical attacks published. New attacks have also been published: denial of service, subscriber impersonation and geolocation of subscribers, among others.

We have also expanded both the theoretical study of 3G protocols and the attack techniques not covered in the first edition, including the ones we explained at RootedCON 2014.

Also, a first approach to the study of the security of the 4G protocols, including a review of the state of the art around 4G attacks, has been added.

The index of the book is available here and you can get it through the publishing house 0xWord.
