Incorporated feedback from (personal) reviewer

- Sentence simplification
- Restructuring: moved UEFI criticism towards the end
- Typos, style issues
Trolli Schmittlauch 2018-09-28 01:00:38 +02:00
parent c709f6e1ac
commit c365d07747
2 changed files with 15 additions and 11 deletions

main.pdf (binary file not shown)

@ -114,7 +114,7 @@ Additionally, international agreements based on the \ac{WIPO} Copyright Treaty \
So right now we are in a situation where all major publishers of e.g. video\footnote{all Hollywood film publishers require adhering to certain protection standards like \cite{movielabsinc.MovieLabsSpecificationEnhanced2018}} and audio\footnote{although legal music file purchases are mostly \ac{DRM}-free, the consumption model of streaming has brought back DRM to platforms like Spotify or Deezer} content, and video games\footnote{Valve's Steam platform integrates its own DRM into sold games; other publishers and gaming consoles have their own DRM systems as well} require the usage of \ac{DRM} for protecting their published works. Thus all software using this content currently has to be proprietary, and whole platforms are being locked down more and more. This development is most apparent on non-PC platforms like mobile devices, where unlocking the bootloader (if even possible) results in deletion of the device's \ac{DRM} keys. \cite{sonydeveloperworldUnlockBootloaderOpen}
But in recent years modern CPU architectures have introduced special hardware-backed \acp{TEE} to provide a secured environment for security-critical code to be executed in isolation from the main \textit{untrusted} \ac{OS}. Can these TEEs and other special hardware trust-anchors provide the possibility to present \ac{DRM}-secured content in an otherwise open and open source system, not having to lock it down completely?
But in recent years modern CPU architectures have introduced special hardware-backed \acp{TEE} to provide a secured environment for security-critical code to be executed in isolation from the main \textit{untrusted} \ac{OS}. Can these \acp{TEE} and other special hardware trust anchors make it possible to present \ac{DRM}-secured content in an otherwise open and open source system? Or does \ac{DRM} only work on completely locked-down systems?
In \ref{sec:background} we first give an overview of common technologies used for providing trust anchors for running systems or isolating software into a trusted environment. Afterwards, \ref{sec:hw_DRM} covers existing DRM architectures already utilizing \acp{TEE}. Aiming for the usage on systems as open as possible, \ref{sec:shortcomings} looks at the security of the presented architectures on such systems and finally at other problematic effects of DRM usage.
@ -128,10 +128,9 @@ Afterwards we cover the technologies dedicated to providing a \ac{TEE} in modern
\subsection[SecureBoot]{UEFI Secure Boot}\label{sec:SecureBoot}
\textit{Secure Boot} is a functionality of the \ac{UEFI} boot firmware component \cite{unifiedefiforuminc.UEFISpecificationVersion2017} to allow only the launch of authenticated boot images. To achieve that, boot images can be signed with X.509 certificates. Only if the image verifies correctly against a key stored in non-volatile firmware memory or against an entry in an explicit allow list of signatures it is launched by the firmware. This first check on which bootloader or \ac{OS} image to launch can be the anchor of a trust chain, if each consecutive execution step also checks the authenticity of software to be launched. The allow and deny lists can be updated from the running \ac{OS} and deploying own custom platform keys for verification can be possible through setup-mode of \ac{UEFI}.
\textit{Secure Boot} is a functionality of the \ac{UEFI} boot firmware component \cite{unifiedefiforuminc.UEFISpecificationVersion2017} to allow only the launch of authenticated boot images. To achieve that, boot images can be signed with X.509 certificates. Only if the image verifies correctly against a key stored in non-volatile firmware memory or against an entry in an explicit allow list of signatures is it launched by the firmware. This first check on which bootloader or \ac{OS} image to launch can be the anchor of a trust chain, if each consecutive execution step also checks the authenticity of the software to be launched. The allow and deny lists can be updated from the running \ac{OS}, and deploying custom platform keys for verification can be possible through the setup mode of \ac{UEFI}.
Firmwares adhering to the \ac{UEFI} standard are the dominant system software for the PC platform (including x86 devices like servers, laptops and mobile devices, but also more and more devices with ARM chipsets). Microsoft has been criticized for requiring devices \cite{MicrosoftHardwareCertification2014} shipped with the Windows \ac{OS} to have Secure Boot enabled by default, verifying boot images against Microsoft's key, mentioning that this could lead to lock-down of consumer hardware and the inability to install alternative operating systems on it. \\
Earlier versions of the requirements mandated an option to disable Secure Boot on x86 devices while also mandating that it must always be enabled on ARM-based devices. \cite{moodyMicrosoftBlockingLinux} Nevertheless complete lockout of other operating systems does not seem to be imminent at least on x86 devices as there is a shim bootloader signed by Microsoft \cite{matthewgarrettAnnouncingShimReview}, enabling the launch of other unsigned boot images. The signature of this specific binary could though be put on the firmware's deny list through updates.
Firmwares adhering to the \ac{UEFI} standard are the dominant system software for the PC platform (including x86 devices like servers, laptops and mobile devices, but also more and more devices with ARM chipsets).
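To make the verification step above more tangible, the following sketch models the decision a Secure Boot capable firmware takes before launching a boot image. It is a minimal TypeScript model for illustration only; the key store layout and the signature-check helper are assumptions and do not correspond to actual \ac{UEFI} interfaces.
\begin{verbatim}
// Illustrative model only, not actual UEFI firmware code.
type Hash = string;

interface BootImage {
  hash: Hash;                  // e.g. SHA-256 of the image
  signerCertificate?: string;  // certificate embedded in the image
  signature?: Uint8Array;
}

interface KeyStore {
  platformKeys: string[];      // trusted signing certificates (db)
  allowList: Hash[];           // explicitly allowed image hashes
  denyList: Hash[];            // revoked hashes/signatures (dbx)
}

// Hypothetical helper: verifies the image signature against a stored key.
declare function verifiesAgainst(image: BootImage, trustedKey: string): boolean;

function firmwareMayLaunch(image: BootImage, keys: KeyStore): boolean {
  // A revoked image is rejected regardless of any valid signature.
  if (keys.denyList.includes(image.hash)) return false;
  // An explicit allow-list entry is sufficient ...
  if (keys.allowList.includes(image.hash)) return true;
  // ... otherwise the signature must verify against a stored platform key.
  return keys.platformKeys.some((key) => verifiesAgainst(image, key));
}
\end{verbatim}
A continuous trust chain then requires every launched stage (bootloader, kernel) to repeat an equivalent check for the next stage it hands control to.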
\subsection{TPM}\label{sec:TPM}
@ -139,7 +138,7 @@ Earlier versions of the requirements mandated an option to disable Secure Boot o
\acp{TPM} provide \textbf{secure storage} inside the module to store sensitive data like cryptographic keys in such a way that it cannot be leaked to or stolen by processes running on the main CPU. \\
These stored keys can be used to execute cryptographic operations on behalf of the system, without the keys getting transferred to the outside of the module. \\
The \textbf{attestation} functionality can attest the identity of software by calculating a hash sum of the supplied binary, signing it with its internal key to prove a certain system configuration or software version to third parties having a corresponding verification key. When pre-deployed with a unique key, all previously mentioned functionality can be combined to provide a \textbf{unique machine identity} and form a hardware trust anchor for running systems. \\
A \textbf{continuous trust chain} can be built from the boot-up on if all software components, starting from the firmware on, let the \ac{TPM} attest the processes to be launched and compare these attestations with the ones stored previously in the module's secure storage. A similar boot policy is \textit{authenticated booting}, where the \ac{TPM} calculates the checksum of each boot stage but does not enforce any signature checks and only stores the results inside its secure internal registers, from where the system status can be queried later.\cite{hartigLateralThinkingTrustworthy2017} In contrast to UEFI Secure Boot (\ref{sec:SecureBoot}), it is also possible to securely attest and launch code using special \textit{late-launch} CPU instructions introduced into AMD and Intel chipsets without providing a continuous trust chain from the firmware on \cite{hartigLateralThinkingTrustworthy2017}. \\
A \textbf{continuous trust chain} can be built from boot-up if all software components, starting with the firmware, let the \ac{TPM} attest the processes to be launched. Based on reference signatures stored in the module's secure storage, each component can then decide to allow or refuse the next software to launch. A boot policy similar to this attestation checking during boot is \textbf{authenticated booting}: again the \ac{TPM} calculates the checksum of each boot stage, but it does not enforce any signature checks and only stores the results inside its secure internal registers, from where the system status can be queried later. \cite{hartigLateralThinkingTrustworthy2017} In contrast to UEFI Secure Boot (\ref{sec:SecureBoot}), it is also possible to securely attest and launch code using special \textit{late-launch} CPU instructions introduced into AMD and Intel chipsets without providing a continuous trust chain from the firmware on \cite{hartigLateralThinkingTrustworthy2017}. \\
Additionally, \acp{TPM} provide important support functionality for cryptographic algorithms like \textbf{secure counters}, a \textbf{secure clock} for peripherals and a \textbf{secure source of entropy}. \cite{197213}
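The measurement step underlying both boot policies can be illustrated by the register-extend operation: each boot stage is hashed and folded into a platform configuration register, so that the final register value depends on the content and order of all measurements. The following sketch is a conceptual TypeScript model using the Node.js crypto module, not a real \ac{TPM} interface.
\begin{verbatim}
// Conceptual model of TPM measurement, not a real TPM driver.
import { createHash } from "crypto";

function sha256(data: Buffer): Buffer {
  return createHash("sha256").update(data).digest();
}

// PCR extend: new value = hash(old value || measured digest), so a single
// register value captures the whole sequence of measurements.
function extendPcr(pcr: Buffer, componentDigest: Buffer): Buffer {
  return sha256(Buffer.concat([pcr, componentDigest]));
}

// Authenticated booting: each stage is measured but never blocked.
const stages = ["firmware", "bootloader", "kernel"].map((s) => Buffer.from(s));
let pcr = Buffer.alloc(32);              // registers start out zeroed
for (const stage of stages) {
  pcr = extendPcr(pcr, sha256(stage));   // measurement is merely recorded
}

// A verifier (local policy or a remote party via attestation) later compares
// the reported register value against the value expected for a known-good
// software stack; any modified stage yields a different value.
const expectedPcr: Buffer = pcr;         // placeholder for a stored reference
console.log(pcr.equals(expectedPcr));
\end{verbatim}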
\subsection{Trusted Execution Environments}
@ -166,7 +165,7 @@ Although Apple's own \acp{SoC} use the ARM architecture, the company has decided
Intel's \acf{SGX} are a method to launch multiple trusted components into their own fully isolated \textit{enclaves} and thus, according to \cite{hartigLateralThinkingTrustworthy2017}, can be seen as a more advanced version of the late-launch approach (\ref{sec:TPM}).
Enclaves can be scheduled by the OS like normal processes, but code and memory of enclaves are only visible from the inside. Enclave memory resides in the system's regular DRAM but is transparently encrypted. \\
Additionally, \ac{SGX} includes attestation of code running within an enclave similar to \ac{TPM}, but does not provide other \ac{TPM} features like secure storage. \\
Additionally, \ac{SGX} includes attestation of code running within an enclave similar to \ac{TPM}, but does not provide other \ac{TPM} features like secure non-volatile storage. \\
As only userland code running in ring 3 can be run inside enclaves, relying on the \ac{OS} for scheduling and resource management, \ac{SGX} is vulnerable to side-channel attacks \cite{peinadom.ControlledChannelAttacksDeterministic2015}.
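Analogous to \ac{TPM} attestation, the value of \ac{SGX} attestation lies in a verifier comparing the reported enclave measurement with a known-good value before provisioning secrets (such as content keys) into the enclave. The sketch below models this check; the quote layout and the vendor signature check are simplified assumptions, not the actual \ac{SGX} data formats.
\begin{verbatim}
// Simplified model of enclave attestation, not the actual SGX quote format.
interface EnclaveQuote {
  mrenclave: string;       // measurement (hash) of the enclave's code/data
  mrsigner: string;        // identity of the enclave author's signing key
  reportData: Uint8Array;  // enclave-chosen data, e.g. a public key
  signature: Uint8Array;   // signature rooted in the CPU's attestation key
}

// Hypothetical helper: checks that the quote's signature chains back to the
// hardware vendor's attestation infrastructure.
declare function signatureChainsToVendor(quote: EnclaveQuote): boolean;

function enclaveIsTrusted(quote: EnclaveQuote, expected: string): boolean {
  return signatureChainsToVendor(quote) && quote.mrenclave === expected;
}

// Only after this check would e.g. a license server hand a content key to
// the enclave, encrypted to the key conveyed in reportData.
\end{verbatim}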
@ -180,7 +179,7 @@ The Android platform provides its own DRM framework\cite{AndroidDRMFramework}, p
The level of protection provided by the DRM plug-in varies depending on the plug-in itself and on the capabilities of the hardware platform. Plug-ins may rely on secure boot for a verified chain of trust from the firmware level on, use protected output mechanisms provided by the hardware platform and even run the programs inside a \ac{TEE}.
Plug-ins are automatically loaded when they are placed into the \texttt{/system/lib/drm/plug-ins/native/} directory. \\
One issue we see here is that there is no mention of authenticity checking of plug-ins in the documentatiion. \cite{AndroidDRMFramework} This can enable DRM plug-ins to claim to be able to decrypt a certain stream and thus at least result in the user not being able to decrypt their media with the proper add-on. Additionally it might also open up attack vectors for the communication with a license server, as shown later. \\
One issue we see here is that there is no mention of authenticity checking of plug-ins in the documentation. \cite{AndroidDRMFramework} This can enable DRM plug-ins to claim to be able to decrypt a certain stream and thus at least result in the user not being able to decrypt their media with the proper add-on. Additionally it might also open up attack vectors for the communication with a license server, as shown later. \\
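The consequence of loading plug-ins purely by directory placement can be sketched as follows: if plug-in resolution relies on a plug-in merely claiming support for a scheme, an unauthenticated plug-in can shadow or impersonate the legitimate one. The interface below is a deliberately simplified TypeScript model of such a resolution step, not the actual Android plug-in API.
\begin{verbatim}
// Simplified model of plug-in resolution, not the actual Android DRM API.
interface DrmPlugin {
  name: string;
  // A plug-in merely *claims* which schemes/MIME types it can handle.
  canHandle(mimeType: string): boolean;
  decrypt(block: Uint8Array): Uint8Array;
}

class DrmManager {
  private plugins: DrmPlugin[] = [];

  // Mirrors loading everything found in the plug-in directory:
  // nothing checks where a plug-in comes from or who signed it.
  register(plugin: DrmPlugin): void {
    this.plugins.push(plugin);
  }

  resolve(mimeType: string): DrmPlugin | undefined {
    // First claimant wins: a rogue plug-in registered earlier can shadow
    // the legitimate one and break (or intercept) playback.
    return this.plugins.find((p) => p.canHandle(mimeType));
  }
}
\end{verbatim}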
One cross-platform DRM plugin solution is \textbf{Widevine} \cite{googleWidevineDRMArchitecture2017}, currently owned by Google. It provides native solutions for the Android, iOS and HTML5 platforms. Thus the \ac{DRM} decryption process is very similar to the one specified in HTML5 \acs{EME} and will be covered in section \ref{sec:HTML5_EME} in more detail. \\
The reason we look at the example of the Widevine Android plugin is that it supports different security levels, depending on the use of hardware security mechanisms \cite{googleWidevineDRMArchitecture2017}: For security level 1, ``[a]ll content processing, cryptography, and control is performed within the Trusted Execution
@ -204,10 +203,10 @@ In \cite{livshitsSecurityNativeDRM2015} Livshits et al. analyse current web brow
\caption{Sequence of information exchange between \ac{EME}\label{fig:EMEsequence} components, source \cite{livshitsSecurityNativeDRM2015}}
\end{figure}
First the browser's media stack parses the embedded media file and discovers the \textit{key id} embedded into the media file's metadata. This fires a \textit{needkeys} event including some initialization data to the web application which then creates the \textit{mediaKeys} and \textit{mediaKeySession} for a specific key system. The initialization data is then pushed to a \ac{CDM} implementing the key system. The \ac{CDM} then creates a \textit{keymessage} for a license server, which is then sent to the license server by the browser and of which the response is passed back to the \ac{CDM} to decrypt the received license and update the \textit{mediaKeySession}. \\
First the browser's media stack parses the embedded media file and discovers the \textit{key id} embedded into the media file's metadata. This fires a \textit{needkeys} event including some initialization data to the web application, which then creates the \textit{mediaKeys} and \textit{mediaKeySession} for a specific key system. After the initialization data is pushed to a \ac{CDM} implementing the key system, the \ac{CDM} generates a \textit{keymessage} for a license server. The browser sends this message to the license server and passes the response back to the \ac{CDM}, which decrypts the received license and updates the \textit{mediaKeySession}. \\
After a \textit{keyadded/keyerror} event is fired back to the web application, indicating the success status of the license retrieval, the application can finally initiate the playback of the media file, causing the \ac{CDM} to decrypt it using the received license key.
Hereby all javascript events fired from the \ac{CDM} to the web aplication contain byte buffers, which the web application running in browser context passes around and sends them to the respective servers. But these byte buffers are usually encrypted by the \ac{CDM} and thus incomprehensible to the browser handling them. \cite{WhatEMEHsivonen}
All JavaScript events fired from the \ac{CDM} to the web application hereby contain byte buffers, which the web application running in the browser context passes around and sends to the respective servers. But these byte buffers are usually encrypted by the \ac{CDM} and thus incomprehensible to the browser handling them. \cite{WhatEMEHsivonen}
One interesting side-note is that due to the usage of the \textit{ISO Common Encryption} standard for encryption of the media files, the same file can be decrypted by different \acp{CDM} implementing different \ac{DRM} schemes. As the license format and content of the encrypted byte buffers are proprietary, differing between various \ac{DRM} schemes, this requires the content provider to operate the respective license servers for each scheme. But having acquired a license, each \ac{CDM} can then decrypt the media file accordingly.
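For comparison with the sequence just described, the following TypeScript fragment shows how a web application drives this exchange through the current \ac{EME} interface, which renamed the events of the earlier draft (\textit{encrypted}, \textit{message} and \textit{keystatuseschange} instead of \textit{needkeys}, \textit{keymessage} and \textit{keyadded/keyerror}). The key system string and the license server URL are placeholders; note that the application only relays opaque byte buffers between \ac{CDM} and license server.
\begin{verbatim}
// Sketch of the EME flow from the web application's point of view.
// Key system, configuration and license server URL are placeholders.
const LICENSE_SERVER = "https://license.example.com/";

async function setupDrm(video: HTMLVideoElement): Promise<void> {
  const config: MediaKeySystemConfiguration[] = [{
    initDataTypes: ["cenc"],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
  }];
  // Select a key system (and thereby a CDM) supported by the browser.
  const access = await navigator.requestMediaKeySystemAccess(
    "com.widevine.alpha", config);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // Fired when the media stack encounters encrypted initialization data
  // ("needkeys" in the older terminology).
  video.addEventListener("encrypted", async (event: MediaEncryptedEvent) => {
    const session = mediaKeys.createSession();

    // The CDM emits an opaque license request; the page only relays it.
    session.addEventListener("message", async (msg: MediaKeyMessageEvent) => {
      const response = await fetch(LICENSE_SERVER, {
        method: "POST",
        body: msg.message,           // opaque byte buffer from the CDM
      });
      // The (again opaque) license is handed back to the CDM.
      await session.update(await response.arrayBuffer());
    });

    await session.generateRequest(event.initDataType, event.initData!);
  });
}
\end{verbatim}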
@ -230,7 +229,7 @@ In 2009 Yu et al. have proposed a \ac{DRM} architecture based on \ac{TPM}, calle
The high-level architecture of TBDRM is outlined in Fig. \ref{fig:TBDRM_arch}. The encrypted content itself can be delivered independently from the license, which consists of a usage policy for the content, its decryption key and some metadata for describing and identifying the matching content. \\
The trust basis for all other client components is provided by the \textit{\acf{AA}}. This component can attest the authenticity and identity of the \textit{\ac{VC}}, which ensures and attests freshness of a license, the \textit{\ac{DC}} handling the actual content decryption and policy enforcement, and the actual media \textit{player} component. \\
When requesting a license, the \ac{LD} decides whether to give a license based on the attested identity of the \ac{DC} and the license version requested. At playback, the trustworthiness of the player component is first attested to the \ac{DC} by the \ac{AA}, the same is done for the \ac{VC} afterwards. If all components are trustworthy to the \ac{DC} and the \ac{VC} also has verified the freshness of the license, the \ac{DC} checks the requested usage permission against the usage policy of the licenses. If access can be granted, the media content is decrypted with the symmetric key provided by the license and passed to the player. If the usage policy mandates a version bump of the license (e. g. for enforcing a playback number limit), this is done by the \ac{VC} after playback.
When requesting a license, the \ac{LD} decides whether to grant a license based on the attested identity of the \ac{DC} and the license version requested. At playback, the trustworthiness of the player component is first attested to the \acl{DC} by the \acl{AA}; the same is done for the \acl{VC} afterwards. If all components are trustworthy to the \ac{DC} and the \ac{VC} has also verified the freshness of the license, the \ac{DC} checks the requested usage permission against the usage policy of the license. If access can be granted, the media content is decrypted with the symmetric key provided by the license and passed to the player. If the usage policy mandates a version bump of the license (e. g. for enforcing a playback number limit), this is done by the \ac{VC} after playback.
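The playback decision just described can be condensed into a short sketch: component identities are attested first, then license freshness is verified, and only then is the usage policy evaluated. The types below are simplified assumptions derived from this description, not the authors' implementation.
\begin{verbatim}
// Conceptual sketch of the TBDRM playback check; simplified types,
// not the authors' code.
interface License {
  contentId: string;
  contentKey: Uint8Array;           // symmetric content decryption key
  version: number;                  // bumped e.g. to enforce play-count limits
  allows(action: "play"): boolean;  // usage policy evaluation
}

interface AttestationAuthority {
  attests(component: "player" | "verification-component"): boolean;
}

interface VerificationComponent {
  isFresh(license: License): boolean;  // license not stale or rolled back
}

// Hypothetical helper standing in for the symmetric content decryption.
declare function decrypt(content: Uint8Array, key: Uint8Array): Uint8Array;

// Decision taken by the decryption component (DC) before playback.
function authorizePlayback(aa: AttestationAuthority, vc: VerificationComponent,
                           license: License,
                           encrypted: Uint8Array): Uint8Array | null {
  if (!aa.attests("player")) return null;                  // untrusted player
  if (!aa.attests("verification-component")) return null;  // untrusted VC
  if (!vc.isFresh(license)) return null;                    // stale license
  if (!license.allows("play")) return null;                 // policy denies use
  return decrypt(encrypted, license.contentKey);            // hand to player
}
\end{verbatim}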
For implementing this architecture, the authors proposed using several \ac{TPM} functionalities to ensure the security (persistent control of content, license integrity/confidentiality/freshness) of the DRM system. \\
In their prototype they ensure a \textbf{trust chain} from the boot on to each DRM scheme component by measuring each system boot step, starting at the hardware trust anchor provided by the \ac{TPM}. Measuring means that the \ac{TPM} computes the hash of a component and stores the result in its internal platform configuration registers, from where it can be retrieved later. First the \ac{TPM} measures the hardware platform configuration, the bootloader containing a \textit{kernel measuring agent} and the kernel containing an \textit{application measurement agent} serving as the \ac{AA} component. A full trust chain can then be established by checking all components' measurements from the respective \ac{TPM} registers. Additionally, the kernel contains the \ac{VC} component and the part of the \ac{DC} responsible for verifying the trustworthiness of all other components, turning all components of the system stack from the hardware to the kernel itself into parts of the \acl{TCB}. The player itself and the \ac{DC} part responsible for interpreting and deciding on policy rules run in user-space and are isolated from each other. The application measurement agent and the user-space \textbf{isolation} are implemented based on a Linux Security Module running in the kernel, the latter by preventing processes from accessing other processes' address spaces, e. g. via ptrace. \\
@ -268,9 +267,14 @@ This is also apparent in the light of several apps using the Android DRM framewo
The advantage of the approaches where \aclp{CDM} run inside a \ac{TEE} is that, given that the \ac{TEE} technology itself is secure and the cryptographic primitives used are semantically secure, authentication issues like the ones just described can be overcome and only the \ac{TEE} running the (proprietary) CDM itself is part of the \ac{TCB}. All other software on the untrusted (virtual) processor, even the operating system, might be modified freely, as all sensitive data (plain content, cryptographic keys) are never accessible outside of the secured environment. When using ARM TrustZone though, care needs to be taken with other code running within the secure world. As there is only one secure world available, all programs running within the \ac{TEE} have to share the same environment, on top of a special purpose operating system like the Trusty \ac{OS} \cite{TrustyTEE} used on Android devices. There, processes running on top of the \ac{OS} are isolated from each other via address space separation, utilizing the \ac{MMU}. So at least the \ac{CDM} and the trusted \ac{OS} it is running on are part of the \ac{TCB}. In the case of Trusty, all processes running on top are even bundled with the \ac{OS}, signed, and then verified by the firmware at boot similarly to Secure Boot. Still, all vulnerabilities in code belonging to the \ac{TCB} are potentially security-critical if exploitable, as demonstrated by a vulnerability in the \ac{TEE} \ac{OS} itself. \cite{shenExploitingTrustzoneAndroid2015}
For the TBDRM approach though the size of the \ac{TCB} is even larger, including the whole kernel and TBDRM's userland components. Firmware and bootloader are part of the trust chain and must not be modified for the DRM system to work, but are not a crucial part of the \ac{TCB} as the kernel running on top is measured by the \ac{TPM} independently. All of this requires a trustworthy \ac{TPM} implementation. \\
For the TBDRM approach though, the size of the \ac{TCB} is even larger, including the whole kernel and TBDRM's userland components. Firmware and bootloader are part of the trust chain and must not be modified for the DRM system to work, but are not a crucial part of the \ac{TCB} as the kernel running on top is measured by the \ac{TPM} independently. As the \ac{TPM} provides important cryptography and trust services to the system, it is part of the \ac{TCB} as well. \\
As measured boot is used, modified software components are not prevented from running, but the DRM functionality would not work on systems not identified as trustworthy. Thus a dual-boot system can be imagined where in one of the systems the kernel can be freely modified at the cost of losing the ability to play DRM-secured content, while the other, unchanged system retains the ability of DRM-protected playback.
When using UEFI SecureBoot, there needs to be a trusted path from the firmware to the trusted DRM software (e. g. a \ac{CDM}) to be able to verify the authenticity of the component. Additionally, appropriate measures need to be taken to ensure that data inside the trusted DRM component cannot be read by untrusted processes. This results in a \ac{TCB} comprising the whole operating system and all higher-privileged userland components, effectively amounting to a lock-down of the whole system.
In practice, Microsoft requires devices \cite{MicrosoftHardwareCertification2014} shipped with the Windows \ac{OS} to have Secure Boot enabled by default, verifying boot images against Microsoft's key. This requirement has been criticized as it might lead to lock-down of consumer hardware and the inability to install alternative operating systems on it. \\
Earlier versions of the requirements mandated an option to disable Secure Boot on x86 devices while also mandating that it must always be enabled on ARM-based devices. \cite{moodyMicrosoftBlockingLinux} At least on x86 devices, a complete lockout of other operating systems does not seem to be imminent, as there is a shim bootloader signed by Microsoft \cite{matthewgarrettAnnouncingShimReview}, enabling the launch of other unsigned boot images. The signature of this specific binary could though be put on the firmware's deny list through updates.
\subsubsection{Media Output on Peripherals}
The secured rendering pipeline used in \ref{sec:HTML5_EME} only reaches the GPU. But how are the decoded audio and video data transferred to the actual display device and speakers? \\