iPAS Exam Preparation Notes - Information Security Engineer
While preparing for iPAS-related topics recently, I took the opportunity to organize the concepts covered by the Information Security Engineer (Junior) certification. While working through practice questions generated by a Gemini Gem, I found that some terms and distinctions are easy to confuse, and that much of the content maps naturally onto engineering practice. So I compiled these notes with a focus on understanding and practical application.
The following are just my personal notes; they do not aim to fit any specific question bank. I deliberately increased the difficulty of the prompts I gave the Gemini Gem, so some sections also include background, tools, or contexts that often appear together in practice.
These notes are written down as thoughts come to me, and I may add to them at any time as I keep practicing. They also contain many code and command sections, included as aids to understanding or for later reference, so I have collapsed them with details tags.
Basic Concepts
Fundamental Principles and Terminology of Information Security
CIA Triad (Core)
| Concept | English | Description | Control Measures |
|---|---|---|---|
| Confidentiality | Confidentiality | Ensure information is accessed only by authorized parties | Encryption, access control, data classification |
| Integrity | Integrity | Ensure information is not modified without authorization | Hash verification, digital signatures, version control |
| Availability | Availability | Ensure authorized users can access information in a timely manner | Backup, disaster recovery, load balancing |
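As a concrete illustration of the integrity controls above, comparing a published hash against a recomputed one detects unauthorized modification (the data values here are made up):

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Publisher computes a digest when the file is released.
original = b"quarterly report v1"
published_digest = sha256_digest(original)

# Receiver recomputes the digest and compares.
received = b"quarterly report v1"
tampered = b"quarterly report v2"

print(sha256_digest(received) == published_digest)   # True: integrity holds
print(sha256_digest(tampered) == published_digest)   # False: modification detected
```

Note that a bare hash only detects accidental or unauthenticated tampering; if an attacker can replace both the file and the digest, a digital signature or HMAC with a protected key is needed.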
Extended Security Attributes
| Concept | English | Description | Control Measures |
|---|---|---|---|
| Authenticity | Authenticity | Verify the authenticity of the information source | Digital signatures, PKI, multi-factor authentication |
| Non-repudiation | Non-repudiation | Ensure actions cannot be denied after the fact | Digital signatures, audit logs, timestamp services |
| Accountability | Accountability | Ensure actions can be traced to a specific individual | Identity authentication, access logs, separation of duties |
AAA Framework (Authentication, Authorization, Accounting)
| Element | English | Description | Control Measures |
|---|---|---|---|
| Authentication | Authentication | Verify user identity ("Who are you?") | Account passwords, MFA/FIDO2, biometrics |
| Authorization | Authorization | Determine accessible resources ("What can you do?") | RBAC, ABAC, OAuth 2.0 |
| Accounting | Accounting | Record user actions for tracking ("What did you do?") | Audit logs, SIEM, NetFlow |
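The Authorization step of AAA can be sketched as a minimal RBAC lookup (the role and permission names are invented for illustration):

```python
# Minimal RBAC sketch: each role maps to a set of allowed actions.
ROLE_PERMISSIONS = {
    "auditor": {"log:read"},
    "operator": {"log:read", "service:restart"},
    "admin": {"log:read", "service:restart", "user:manage"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Authorization: map an already-authenticated role to allowed actions.
    Unknown roles get an empty permission set (deny by default)."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("auditor", "log:read"))         # True
print(is_authorized("auditor", "service:restart"))  # False
```

In a real system the Accounting step would log every `is_authorized` decision to an audit trail; here only the access decision is shown.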
CIA vs AAA
- CIA describes the security attributes information should possess (protection goals); AAA describes the three stages of access control (implementation mechanisms). The two are complementary.
Asset Valuation Criteria
| Value Type | Evaluation Factors | Quantification Method | Example |
|---|---|---|---|
| Direct Cost | Reconstruction, procurement costs | Monetary value | Software license fees, hardware equipment costs |
| Indirect Cost | Operational disruption, reputational damage | Estimated revenue loss | Loss per hour of system downtime |
| Legal Cost | Regulatory compliance, risk of fines | Potential penalties | GDPR fines up to 4% of annual turnover |
| Competitive Value | Intellectual property, trade secrets | Competitive advantage assessment | R&D results, customer lists |
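The indirect-cost row can be made concrete with a back-of-the-envelope calculation (all figures are hypothetical):

```python
# Hypothetical figures: estimating the impact of a single downtime incident.
hourly_revenue = 50_000   # revenue normally earned per hour of operation
downtime_hours = 6
recovery_cost = 20_000    # direct cost: engineer time, replacement hardware

indirect_loss = hourly_revenue * downtime_hours   # lost revenue while down
total_impact = indirect_loss + recovery_cost      # direct + indirect cost

print(indirect_loss)  # 300000
print(total_impact)   # 320000
```

Estimates like this feed into asset valuation and, later, risk treatment decisions (e.g., whether a redundancy investment is cheaper than the expected loss).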
Defense in Depth
An architectural strategy of multi-layered security controls, ensuring that even if one layer is breached, other layers still provide protection.
| Layer | Control Measures |
|---|---|
| Governance Layer | Policy, Security Awareness Training |
| Physical Security | Access Control, CCTV |
| Network Perimeter | Firewall / IPS (Intrusion Prevention System) / SDP (Software Defined Perimeter) |
| Internal Network | Zero Trust / NAC (Network Access Control) |
| Host Security | EDR (Endpoint Detection and Response) / AppLocker (Application Whitelisting) |
| Application Security | WAF (Web Application Firewall) / RASP (Runtime Application Self-Protection) / SSDLC (Secure Software Development Lifecycle) |
| Data Security | Encryption / DLP (Data Loss Prevention) |
Security Governance Document Hierarchy
| Level | English | Nature | Example |
|---|---|---|---|
| Policy | Policy | Explains "what to do" and "why," without specifying technical details. Approved by top management, mandatory, violation constitutes non-compliance. | "All data transmission must be encrypted" |
| Standard | Standard | Explains the "minimum technical threshold to meet policy requirements," mandatory. Specifies specific versions, algorithms, or settings; violation constitutes non-compliance. | "Use TLS 1.2 or higher" |
| Procedure | Procedure | Explains "how to execute," listing repeatable operational steps. Mandatory, personnel must follow steps; deviation requires formal approval. | "Account application SOP" |
| Guideline | Guideline | Provides recommended practices, non-mandatory. Personnel can judge whether to adopt based on context; deviation does not constitute non-compliance. | "Recommended password length of 12+ characters" |
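A Standard such as "use TLS 1.2 or higher" translates directly into configuration. A client-side sketch using Python's standard `ssl` module:

```python
import ssl

# Enforce the example standard "use TLS 1.2 or higher" on outbound connections.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # handshakes below TLS 1.2 are refused

print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

This is the Policy → Standard → Procedure chain in miniature: the policy says "encrypt in transit," the standard pins "TLS 1.2+," and the procedure is the concrete configuration step.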
Information Ethics: PAPA Theory
Proposed by scholar Richard Mason in 1986, defining four ethical issues in the information age.
| Abbreviation | Issue | Core Question |
|---|---|---|
| P (Privacy) | Privacy | Individuals have the right to decide whether to disclose their own information |
| A (Accuracy) | Accuracy | Responsibility for the authenticity and correctness of information |
| P (Property) | Property | Ownership of information intellectual property rights |
| A (Accessibility) | Accessibility | Under what conditions is one qualified to access information |
Difference between Accessibility and Availability
Accessibility ≠ Availability: The former is "who is qualified to use it," the latter is "can the system be used."
Information Asset Classification Standards
| Level | English | Standard | Typical Example |
|---|---|---|---|
| Public | Public | Disclosure causes no harm | Marketing materials on the company website, announced financial reports |
| Internal | Internal | Disclosure causes no major damage, but not actively leaked | Internal company website operational procedure documents |
| Confidential | Confidential | Disclosure may damage the enterprise | Trade secrets, unannounced product R&D plans |
| Private | Private | Disclosure may damage others' privacy | Employee ID numbers, customer credit card numbers, etc. |
Government Data Classification
In Taiwan, the "Classified National Security Information Protection Act" defines three classification levels from high to low: Absolute Secret → Top Secret → Secret. Ordinary official documents are instead marked with a lower sensitivity level ("Restricted" or "General") according to the "Document Processing Manual." The logic resembles the four-level enterprise classification; the core difference is that government classification focuses on national security impact, while enterprise classification focuses on commercial damage.
Asset Management Roles
| Role | English | Typical Holder | Responsibility |
|---|---|---|---|
| Asset Owner | Asset Owner | Business Unit Manager | Determines classification level, approves access authorization, bears final security responsibility |
| Asset Custodian | Asset Custodian | IT Department | Implements storage, backup, access control, and other technical measures according to owner instructions; no authority to adjust levels independently |
Key Points for Asset Classification
- Level adjustments (including upgrades and downgrades) must be decided by the Asset Owner according to organizational policy and risk assessment procedures and cannot be changed arbitrarily; downgrades must be documented in writing for audit purposes.
- Personally Identifiable Information (ID card, credit card number) → Private; Trade secrets → Confidential.
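A small sketch of how such a classification policy might be encoded (the type names and the fail-closed default are illustrative design choices, not from any standard):

```python
# Illustrative mapping from data type to classification level,
# following the examples in the table above.
CLASSIFICATION = {
    "marketing_material": "Public",
    "internal_sop": "Internal",
    "trade_secret": "Confidential",
    "national_id": "Private",
    "credit_card_number": "Private",
}

def required_level(data_type: str) -> str:
    # Design choice: unknown data defaults to a restrictive level (fail closed)
    # until the Asset Owner classifies it.
    return CLASSIFICATION.get(data_type, "Confidential")

print(required_level("credit_card_number"))  # Private
print(required_level("unknown_blob"))        # Confidential
```

In practice the mapping itself would be owned by the Asset Owner; code like this only enforces what the owner has decided.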
Differences from Information Security Roles
Similar to the Owner/Custodian direction in asset management, but the legal framework is different and cannot be directly equated:
- Data Controller vs Asset Owner: The legal responsibility of the controller is defined by GDPR / Personal Data Protection Act, and violations can be penalized by competent authorities; the responsibility of the Asset Owner comes from internal organizational policy.
- Data Processor vs Asset Custodian: A written Data Processing Agreement (DPA) is required between the processor and the controller; the relationship between the custodian and the owner is an internal division of duties and does not require external contracts.
Regulations and Compliance
ISO/IEC 27001 and Basic ISMS Requirements
ISO/IEC 27000 Series Standards
| Standard | Topic | Certifiable? | Key Description |
|---|---|---|---|
| ISO/IEC 27000 | Overview of terms and definitions | ❌ Not certifiable | Defines terms and concepts used throughout the 27000 series; the basic dictionary for reading other standards |
| ISO/IEC 27001 | Information Security Management System (ISMS) Requirements | ✅ Can apply for 3rd party certification | Specifies the requirements that an organization must (SHALL) establish, implement, and maintain for an ISMS; the core of the series certification |
| ISO/IEC 27002 | Information Security Control Measures Guidelines | ❌ Not certifiable | Provides implementation suggestions (SHOULD) for control measures in Annex A of 27001; an operational manual, not a specification |
| ISO/IEC 27003 | ISMS Implementation Guidance | ❌ Not certifiable | Explains how to implement 27001 clauses, providing implementation examples and recommended practices |
| ISO/IEC 27004 | Information Security Measurement and Evaluation | ❌ Not certifiable | Provides design methods for measurement indicators, corresponding to 27001 Clause 9 (Performance Evaluation), assisting organizations in evaluating ISMS effectiveness |
| ISO/IEC 27005 | Information Security Risk Management | ❌ Not certifiable | Provides guidance on the risk management process (Identification → Assessment → Treatment → Monitoring), providing a methodology for 27001 risk assessment |
| ISO/IEC 27006 | Requirements for Certification Bodies | ✅ (Applicable to the certification body itself) | Specifies the conditions that third-party bodies performing ISMS audits and certifications must meet; not applicable to general organizations |
| ISO/IEC 27007 | ISMS Audit Guidance | ❌ Not certifiable | Provides methodology for performing ISMS internal audits and third-party audits, supplementing ISO 19011 for information security audit scenarios |
| ISO/IEC 27017 | Cloud Service Security Controls | Depends on certification body | Additional control guidance for cloud providers and tenants, supplementing 27002 for cloud scenarios |
| ISO/IEC 27018 | Public Cloud PII Protection | Depends on certification body | Guidance for protecting personal data in cloud environments, in line with the spirit of GDPR |
ISO/IEC 27001 and SoA Key Points
- Clauses 4–10 of 27001 are mandatory (SHALL); organizations must meet all of them to obtain certification.
- Passing 27001 certification = ISMS management system meets the standard; 27002 is a reference manual for "how to do it" and is not certifiable itself.
- The 93 control measures in Annex A do not all require implementation; organizations select applicable items based on risk assessment results and record the selection rationale or exclusion explanation in the Statement of Applicability (SoA).
- 27002 provides implementation suggestions (SHOULD) for each control measure; practices can differ from 27002, but this must be explained in the SoA.
- ISO only publishes standards and does not issue individual qualifications; Lead Auditor certificates are issued by personnel certification bodies (such as IRCA, PECB) in accordance with ISO/IEC 17024.
ISMS Core Elements
| Element | English | Description | Specific Requirements |
|---|---|---|---|
| Context | Context | Understand the organizational environment and stakeholder needs | Identify internal/external issues, regulatory requirements, stakeholder expectations |
| Leadership | Leadership | Commitment and participation of senior management | Establish security policy, assign security responsibilities, provide resources |
| Planning | Planning | Risk assessment and goal setting | Perform risk assessment, formulate risk treatment plans, set measurable security goals |
| Support | Support | Provide necessary resources and capabilities | Staffing, education and training, documented procedures, internal communication |
| Operation | Operation | Implement and run the ISMS | Execute risk treatment measures, control measure operation, supplier management |
| Performance Evaluation | Performance Evaluation | Monitoring and measurement | Internal audit, management review, performance indicator monitoring |
| Improvement | Improvement | Continuous improvement mechanism | Non-conformity handling, corrective actions, preventive actions |
ISO 27001 Annex A Control Measure Categories
| Topic | English | Number of Controls | Coverage |
|---|---|---|---|
| Organizational | Organizational | 37 | Security policy, roles and responsibilities, asset management, supplier relationships, incident management |
| People | People | 8 | Pre-employment screening, security awareness training, disciplinary procedures, termination procedures |
| Physical | Physical | 14 | Secure areas, equipment protection, cabling security, media disposal |
| Technological | Technological | 34 | Access control, encryption, network security, secure development, vulnerability management |
- Total of 93 control measures (the 2013 version had 114; after the 2022 revision, 11 were added and others were merged/streamlined).
- New controls include: Threat Intelligence, Cloud Service Security, Data Masking, Monitoring Activities, etc.
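One of the 2022 additions, Data Masking, can be illustrated with a short sketch (the masking rule shown is illustrative, not official PCI DSS guidance):

```python
import re

def mask_pan(pan: str) -> str:
    """Mask a card number, keeping only the last four digits.
    Illustrative only; real masking rules come from the applicable standard."""
    digits = re.sub(r"\D", "", pan)   # strip spaces/dashes
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_pan("4929 1234 5678 9012"))  # ************9012
```

Masking differs from encryption: the original value is not recoverable from the masked form, which makes it suitable for display, logs, and test data.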
ISMS Effectiveness Indicators
| Indicator Type | Example Indicator | Target Value Reference |
|---|---|---|
| Preventive Effect | Security awareness training completion rate, vulnerability patching time | Training completion rate > 95%, high-risk vulnerability patching < 72 hours |
| Detection Capability | Mean Time to Detect (MTTD), false positive rate | MTTD < 24 hours, false positive rate < 5% |
| Response Efficiency | Mean Time to Respond (MTTR), incident resolution rate | MTTR < 4 hours, resolution rate > 98% |
| Compliance Level | Audit finding improvement rate, control measure effectiveness | Major findings 100% improved, control effectiveness > 90% |
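MTTD and MTTR are simple averages over incident timestamps; a sketch with hypothetical records:

```python
from datetime import datetime, timedelta

# Hypothetical incident records: (occurred, detected, resolved) timestamps.
incidents = [
    (datetime(2024, 1, 1, 0, 0), datetime(2024, 1, 1, 6, 0), datetime(2024, 1, 1, 9, 0)),
    (datetime(2024, 2, 1, 0, 0), datetime(2024, 2, 1, 2, 0), datetime(2024, 2, 1, 3, 0)),
]

def mean_hours(deltas):
    """Average a list of timedeltas and express the result in hours."""
    return sum(deltas, timedelta()) / len(deltas) / timedelta(hours=1)

mttd = mean_hours([d - o for o, d, _ in incidents])  # mean time to detect
mttr = mean_hours([r - d for _, d, r in incidents])  # mean time to respond

print(mttd)  # 4.0
print(mttr)  # 2.0
```

Against the reference targets above, this hypothetical data would pass MTTD (< 24 h) and MTTR (< 4 h).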
ISMS PDCA Cycle
ISO/IEC 27001 adopts the PDCA cycle to ensure continuous improvement of the ISMS:
| Stage | English | Core Task |
|---|---|---|
| Plan | Plan | Establish ISMS policy, goals, and risk assessment processes; formulate risk treatment plans |
| Do | Do | Implement and run ISMS policies, control measures, and procedures |
| Check | Check | Evaluate ISMS performance, perform internal audits and management reviews, report results to management |
| Act | Act | Take corrective and preventive actions based on audit results to promote continuous ISMS improvement |
Audit Types (Three Parties) Comparison Table
| Type | English | Description | Typical Example |
|---|---|---|---|
| 1st Party Audit | 1st Party | Internal audit, organization performs on itself | Internal security audit performed by the company itself |
| 2nd Party Audit | 2nd Party | Audit performed by an external party with a stake in the organization, such as a customer or a competent authority auditing the entities it oversees | Financial Supervisory Commission audits banks under its jurisdiction; customers audit suppliers |
| 3rd Party Audit | 3rd Party | Performed by an independent verification/certification body, can issue certification certificates | ISO 27001 certification audit |
Judging 2nd Party vs 3rd Party Audits
- Only 3rd party audits can issue external certification certificates (e.g., passing ISO 27001 certification).
- Competent authority audits (e.g., Financial Supervisory Commission checking banks) = 2nd party, not 3rd party.
- Common misconception: It feels like the competent authority is an "external independent third party," but in ISO definitions, external parties with a vested interest (regulatory agencies, customers) are all considered 2nd party.
Third-Party Audit Certification Comparison Table
| Certification / Report | Nature | Audit Scope | Characteristics |
|---|---|---|---|
| SOC 2 Type 1 | 3rd Party Audit Report | Whether system design at a specific point in time meets Trust Services Criteria (TSC) | Point-in-time, only proves design is reasonable, does not verify actual operational effectiveness |
| SOC 2 Type 2 | 3rd Party Audit Report | Whether system operation over a period of time (usually 6–12 months) meets TSC | Period-of-time, verifies control measures are continuously and effectively operating; more persuasive than Type 1 |
| ISO 27001 Certificate | 3rd Party Certification | Whether the ISMS management system meets ISO 27001 standards | Annual surveillance audit + recertification every three years; focuses on "management system" rather than technical control operational details |
| PCI DSS AoC | Attestation of Compliance | Whether the Cardholder Data Environment (CDE) meets PCI DSS requirements | Applicable to organizations handling credit card transactions, requirements vary greatly by level (Level 1 requires QSA on-site audit) |
SOC 2 Type 1 vs Type 2
- Type 1 = Blueprint Review: Control measures are reasonably designed, but it has not been verified whether they are actually being executed.
- Type 2 = Actual Acceptance: During an observation period, control measures are indeed operating effectively.
- Cloud service providers (e.g., AWS, Azure) usually obtain SOC 2 Type 2 for enterprise customers to use in supplier risk assessments.
NIST CSF and NIST SP 800 Series
NIST CSF (Cybersecurity Framework)
A voluntary cybersecurity framework released by the US NIST (National Institute of Standards and Technology) and widely used across industries. CSF 2.0 (released in 2024) added the Govern function, bringing the core functions to six and emphasizing that security risk management is a governance concern; the framework gives organizations a structured way to assess and improve their security posture.
| Function | English | Description | Corresponding Activities |
|---|---|---|---|
| Govern | Govern (GV) | Establish security governance structure and strategy to ensure security risk management aligns with organizational goals (Added in CSF 2.0) | Security policy formulation, role and responsibility assignment, supply chain risk management |
| Identify | Identify (ID) | Inventory organizational assets, business environment, and risks | Asset management, risk assessment, supply chain identification |
| Protect | Protect (PR) | Implement control measures to protect critical assets | Access control, data security, security awareness training, platform security |
| Detect | Detect (DE) | Timely discovery of security incidents | Continuous monitoring, anomaly analysis, incident detection procedures |
| Respond | Respond (RS) | Take action on confirmed incidents | Incident management, analysis, notification, corrective actions |
| Recover | Recover (RC) | Restore affected services to normal | Recovery plan execution, improvement measures, external communication |
NIST CSF vs ISO 27001
- NIST CSF: Voluntary framework, no certification system, focuses on "what to do" (function-oriented), suitable as a starting point for security maturity assessment.
- ISO 27001: Can apply for 3rd party certification, focuses on "how to manage" (management system-oriented), suitable for organizations that need to prove compliance externally.
- Complementary: Organizations can use NIST CSF to assess the gap between current status and goals, and then use ISO 27001 to establish a certifiable management system.
- CSF 2.0 added the "Govern" function, emphasizing that security governance should be led by senior management, which is consistent with the spirit of ISO 27001 management review.
NIST SP 800 Series
A series of Special Publications released by NIST, covering technical guidance and control specifications for various security topics, primarily for US federal agencies to follow for FISMA compliance, but also widely referenced by non-federal organizations.
| Document | Topic | Key Description |
|---|---|---|
| SP 800-37 | Risk Management Framework (RMF) | Defines a seven-step risk management process (Prepare, Categorize, Select, Implement, Assess, Authorize, Monitor), the main compliance basis for federal agencies |
| SP 800-53 | Security and Privacy Controls | Hundreds of control measures classified into 20 control families (e.g., AC Access Control, IR Incident Response), the implementation basis for SP 800-37 |
| SP 800-61 | Computer Security Incident Handling Guide | (Rev. 3, 2025) Incorporates incident response into the overall risk management context of CSF 2.0; IR activities span all six Functions (Govern, Identify, Protect, Detect, Respond, Recover) |
| SP 800-63 | Digital Identity Guidelines | Specifies Identity Assurance Levels (IAL), Authenticator Assurance Levels (AAL), and Federation Assurance Levels (FAL) |
| SP 800-88 | Guidelines for Media Sanitization | (Rev. 2, 2025) Divides media sanitization into three levels: Clear / Purge / Destroy, providing a decision framework for choosing sanitization methods based on data sensitivity and device type |
| SP 800-171 | Protecting CUI | Security requirements for non-federal systems processing Controlled Unclassified Information (CUI), commonly used for government supply chain compliance |
| SP 800-207 | Zero Trust Architecture | Defines seven Zero Trust Tenets and the PE/PA/PEP logical architecture components, providing a reference architecture for federal agencies to migrate to Zero Trust |
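The SP 800-88 Clear/Purge/Destroy choice can be caricatured as a tiny decision function (the branching conditions are a simplification of the real decision flow, which also weighs media type and reuse plans):

```python
def sanitization_method(sensitivity: str, leaving_org: bool) -> str:
    """Toy sketch loosely following SP 800-88's Clear/Purge/Destroy levels.
    Illustrative only; the actual guideline's flowchart has more factors."""
    if sensitivity == "low" and not leaving_org:
        return "Clear"    # logical overwrite; media stays under org control
    if sensitivity in ("low", "moderate"):
        return "Purge"    # e.g. cryptographic erase or firmware secure erase
    return "Destroy"      # shred/incinerate; media cannot be reused

print(sanitization_method("low", leaving_org=False))  # Clear
print(sanitization_method("high", leaving_org=True))  # Destroy
```

The general intuition: the more sensitive the data and the less control the organization retains over the media, the stronger the sanitization level required.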
NIST CSF vs SP 800 Series
- CSF: Defines "what results to achieve" (high-level function-oriented), suitable for assessing and communicating security posture.
- SP 800 Series: Defines "what specific controls and processes to implement" (detailed technical-oriented), suitable for federal compliance and detailed implementation planning.
- Complementary: CSF is the map, SP 800 series is the construction specification for each area.
COBIT Governance Framework
COBIT (Control Objectives for Information and Related Technologies)
An IT governance framework released by ISACA; the current version is COBIT 2019. The core question is "Is IT doing the right things, and is it doing them compliantly?", aimed at management and auditors.
COBIT divides IT activities into two levels:
| Level | English | Description |
|---|---|---|
| Governance | Governance | Set direction, evaluate options, monitor execution; responsible by the Board or senior management |
| Management | Management | Plan, build, run, monitor specific activities; responsible by IT management |
Common use cases: IT audit, SOX compliance (Sarbanes-Oxley Act), and combined with ISO 27001 as a governance reference.
ITIL Service Management Framework
ITIL (Information Technology Infrastructure Library)
An IT service management framework released by PeopleCert (formerly AXELOS); the current version is ITIL 4 (2019). The core question is "Is IT delivering services well and operating stably?", aimed at IT operations and service management teams.
ITIL 4 centers on the Service Value System (SVS), containing 34 management practices, divided into three categories by function:
| Category | Description | Representative Practices |
|---|---|---|
| General Management Practices | Management activities common across the organization | Risk management, information security management, knowledge management |
| Service Management Practices | Management activities specific to IT services | Incident management, problem management, change control, service desk |
| Technical Management Practices | Technology-oriented management activities | Infrastructure management, deployment management |
Intersection with security: Incident Management, Problem Management, and Change Control processes overlap highly with security incident handling and vulnerability patching processes.
COBIT vs ITIL
- COBIT: IT governance framework, answers "Are we doing the right things?", aimed at management and auditors.
- ITIL: IT service management framework, answers "Are we doing services well?", aimed at operations teams.
- The two are parallel, with no hierarchical relationship. In practice, they can be used together: COBIT sets governance goals, ITIL implements service processes.
GDPR and Taiwan Personal Data Protection Act Comparison Table
| Aspect | GDPR (EU) | Taiwan Personal Data Protection Act |
|---|---|---|
| Applicability | Organizations processing PII of EU residents (not limited by geography; Taiwan enterprises serving EU users also apply) | Public and non-public agencies collecting, processing, or using PII within Taiwan |
| PII Definition | Any information that can directly or indirectly identify a natural person | Name, date of birth, ID number, etc., that can directly or indirectly identify an individual |
| Core Principles | Lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity and confidentiality | Legitimate purpose, necessity principle, data subject consent (written consent required for sensitive personal data) |
| Data Subject Rights | Access, rectification, erasure (right to be forgotten), portability, object to processing, restriction of processing | Inquiry, review, copy, supplement, rectification, stop collection/processing/use, erasure |
| Data Breach Notification | Notify competent authority (DPA) within 72 hours | No unified statutory time limit, notify as soon as possible according to competent authority requirements |
| Maximum Penalty | €20,000,000 or 4% of global annual turnover (whichever is higher) | Non-public agencies: Fine of up to NT$15 million |
| Data Protection Officer (DPO) | Mandatory in specific circumstances | No mandatory requirement |
| Cross-border Transfer | Must ensure the receiving country has an adequate level of protection (e.g., SCCs standard contractual clauses, adequacy decision) | Must be based on a specific purpose and protected by the laws of the receiving country, or obtain data subject consent |
TIP
- Extraterritorial effect of GDPR: The trigger condition is not "where the organization is," but "whether it actively provides services to EU residents or monitors their behavior" (GDPR Art. 3(2)). Practical judgment criteria include: whether a local EU language is provided, whether Euro payments are accepted, whether the EU market is explicitly mentioned in marketing, etc. If the service has no geographic identification mechanism and treats all regions equally, it depends on whether there is an "intent to actively reach EU users," not automatically applicable just because there are EU users.
- The "right to be forgotten" and "data portability" are the most obvious differences between GDPR and Taiwan's Personal Data Protection Act.
- GDPR penalties are far higher than Taiwan's Personal Data Protection Act, so for enterprises with large business scales in the EU, GDPR compliance priority is usually higher.
Taiwan Personal Data Protection Act: Notification Obligation (Articles 8, 9)
The timing of notification when collecting PII depends on the collection method:
| Collection Method | Article | Timing of Notification |
|---|---|---|
| Direct Collection (obtained from the data subject, e.g., filling out a form) | Article 8 | At the time of collection |
| Indirect Collection (obtained from a third party, not the data subject) | Article 9 | Upon first use of the PII |
The notification content is the same for both cases: name of the collecting agency, purpose of collection, PII category, period/region/object/method of use, and rights the data subject can exercise.
Advanced Privacy Concepts
Data Sovereignty: Data is subject to the laws of the country where it is stored. When an organization uses cloud services, if data is stored on servers in other countries, it may be subject to the laws of multiple countries simultaneously (e.g., the US CLOUD Act allows the US government to cross-border access data held by US companies).
DPIA (Data Protection Impact Assessment): Article 35 of the GDPR requires prior impact assessment for high-risk PII processing activities to identify privacy risks and formulate mitigation measures. Although Taiwan's Personal Data Protection Act has no explicit DPIA article, competent authorities encourage organizations to conduct them voluntarily.
Data Minimization and Purpose Limitation
These are two core concepts among the eight GDPR principles most often related to "collecting PII":
| Principle | English | Definition | Common Violation Case |
|---|---|---|---|
| Data Minimization | Data Minimization | Collect only the PII necessary to achieve a specific purpose; do not collect more or excessively | Requiring ID number, occupation, annual income, etc., unrelated to the service when applying for membership |
| Purpose Limitation | Purpose Limitation | PII can only be used for the purpose stated at the time of original collection; it cannot be used for other purposes | Phone numbers collected for "customer service needs" are used for marketing SMS |
- Complementary: Purpose limitation determines "where it can be used," data minimization determines "how much can be collected."
- Violating purpose limitation is usually more serious than violating data minimization, because if already collected data is misused, it is difficult for the data subject to remedy.
Data Processing Roles
| Role | English | Typical Holder | Responsibility |
|---|---|---|---|
| Data Controller | Data Controller | Enterprise or agency collecting PII | Determines the purpose and method of PII processing, bears primary legal responsibility externally |
| Data Processor | Data Processor | Third-party vendor entrusted | Executes PII processing according to controller instructions, must sign a Data Processing Agreement (DPA) |
HIPAA Health Data Protection Regulations
HIPAA (Health Insurance Portability and Accountability Act)
A US federal law enacted in 1996 that mandates privacy and security protections for Protected Health Information (PHI) in the healthcare industry. It applies to two categories of entities:
- Covered Entities: Healthcare providers, health insurance companies, health information exchange organizations.
- Business Associates: Third-party vendors entrusted by covered entities to process PHI (e.g., cloud storage, billing system operators).
| Rule | English | Core Requirement |
|---|---|---|
| Privacy Rule | Privacy Rule | Limits the scope of PHI use and disclosure, requires patient authorization; grants patients the right to access and correct their own data |
| Security Rule | Security Rule | Mandates administrative (policy procedures), physical (equipment access control), and technical (encryption access control) protection measures for electronic PHI (ePHI) |
| Breach Notification Rule | Breach Notification Rule | ePHI breaches must be notified to affected individuals and HHS within 60 days; single incidents involving 500+ people must be notified to local media simultaneously |
💡 Terminology Quick Check
- PHI: Protected Health Information
- ePHI: Electronic Protected Health Information
- HHS: Department of Health and Human Services
- HIPAA vs GDPR: HIPAA only applies to the healthcare industry and is a US regulation; GDPR applies across industries and covers all PII processing within the EU.
Major Frameworks and Regulations Quick Check
| Framework/Regulation | Nature | Focus | Applicable Objects | Mandatory? |
|---|---|---|---|---|
| ISO 27001 | Management System Standard | Information Security Management (ISMS) | Universal (all industries) | Voluntary (can apply for certification) |
| NIST CSF | Security Framework | Security Risk Management (6 Core Functions) | Universal (US government priority) | Voluntary |
| NIST SP 800 Series | Technical Guidance & Control Specs | Specific implementation details for various security topics (including 800-53 controls) | US federal agencies (non-federal can reference) | Mandatory for federal agencies |
| COBIT | Governance Framework | IT Governance and Control Objectives | IT governance layer, auditors | Voluntary (often used as audit benchmark) |
| ITIL | Service Management Framework | IT Service Delivery and Operations | IT operations/service management teams | Voluntary |
| GDPR | Regulation | Personal Data Protection | Organizations processing PII of EU residents | Mandatory |
| Taiwan Personal Data Protection Act | Regulation | Personal Data Protection | Agencies and enterprises collecting/processing PII within Taiwan | Mandatory |
| HIPAA | Regulation | Healthcare Information Protection | US healthcare industry | Mandatory |
| PCI DSS | Industry Standard | Credit Card Transaction Data Security | Organizations handling credit card transactions | Mandatory (card organization requirement) |
- ISO 27001 vs NIST CSF: 27001 has a 3rd party certification system; CSF has no certification, focuses on maturity assessment.
Information Security Responsibility Level Operation Requirements Table (Levels A to E)
Responsibility levels decrease from A to E, with Level A being the strictest and Level E the most lenient.
| Item / Responsibility Level | Level A | Level B | Level C | Level D | Level E |
|---|---|---|---|---|---|
| Full-time Security Personnel | 4+ | 2+ | 1+ | 0 (concurrently held by IT staff) | 0 (concurrently held by general staff) |
| ISMS Verification | Entire agency mandatory 3rd party verification | Core systems must pass 3rd party verification | Core systems must pass 3rd party verification | Self-conducted according to competent authority regulations | Self-conducted according to competent authority regulations |
| SOC (Security Operations Center) Monitoring | Must be built, 24/7 monitoring for the entire agency | Must be built for core systems | May be built based on agency size and risk needs | None | None |
| Vulnerability Scanning | Once a year (entire agency) | Once a year (core systems) | Once a year (core systems) | Once every 2 years | None |
| Penetration Testing | Once every 2 years | Once every 2 years | Once every 2 years | None | None |
| Security Health Check | Once a year | Once a year | Once every 2 years | None | None |
| Security Education & Training (Hours/Year) | Full-time personnel: 12 hours | Full-time personnel: 12 hours | Full-time personnel: 12 hours | Concurrently held IT staff: 3 hours | General concurrently held staff: 3 hours |
TIP
- A vs B: The difference in ISMS verification and SOC monitoring is the scope; Level A covers the entire agency, Level B only targets core systems.
- Level C is the watershed: The minimum threshold for penetration testing and security health checks; neither exists from Level D onwards.
- Level D onwards switches to concurrent system: No full-time security personnel, concurrently held by IT staff, vulnerability scanning reduced to once every 2 years.
- E vs D: Vulnerability scanning is still once every 2 years at Level D, and only completely cancelled at Level E; concurrently held personnel also drop from IT staff to general staff.
Risk Management
Differences between Vulnerability, Threat, and Risk
| Term | English | Description | Analogy |
|---|---|---|---|
| Vulnerability | Vulnerability | A flaw in a system or process that can be exploited | The door lock is broken |
| Threat | Threat | An event or actor that could exploit a vulnerability to cause damage | A thief is active in the area |
| Risk | Risk | The likelihood and impact of a threat exploiting a vulnerability | The probability and loss of being breached |
- Risk = Threat × Vulnerability × Asset Value. If any of the three factors is absent (effectively zero), the risk disappears entirely.
Detailed Risk Assessment and Treatment Process
| Stage | Input | Activity | Output |
|---|---|---|---|
| Asset Identification | Organizational structure, business processes | Inventory and classify information assets | Asset inventory, asset value |
| Threat/Vulnerability Identification | Asset inventory, threat intelligence | Identify applicable threats and existing vulnerabilities | Threat list, vulnerability list |
| Risk Analysis | Threats, vulnerabilities, existing controls | Assess risk likelihood and impact | Inherent risk, residual risk |
| Risk Assessment | Risk value, risk appetite | Determine risk acceptability | Risk level, treatment priority |
| Risk Treatment | Unacceptable risk | Select and implement treatment measures | Risk treatment plan, control measures |
Comparison of Risk Analysis Methods
| Aspect | Qualitative Analysis | Semi-Quantitative Analysis | Quantitative Analysis |
|---|---|---|---|
| Output Format | Level description (e.g., High/Medium/Low, Risk Matrix) | Relative score (Likelihood score × Impact score) | Financial value (e.g., ALE = 120k/year) |
| Data Requirement | Expert judgment, questionnaires, interviews | Expert judgment + rating scale | Historical event data, statistical models |
| Analysis Difficulty | Lower | Medium | Higher, requires statistical and financial analysis skills |
| Subjectivity | High (depends on assessor experience) | Medium (level mapping still has subjectivity) | Low (calculated based on data) |
| Typical Method | Risk Matrix (Likelihood × Impact), Delphi Method | Score Matrix (High=5, Medium=3, Low=1 multiplied and sorted) | ALE formula, Monte Carlo Simulation, FAIR model |
| Applicable Scenario | Preliminary screening, quick classification when resources are limited | Lacks historical data but needs cross-project sorting and comparison | When financial decision basis is needed (e.g., ROSI Analysis) |
Combining Qualitative and Quantitative Analysis
In practice, qualitative analysis is often used first to quickly screen high-risk items, and then quantitative analysis is performed on these items to produce financial data. Pure quantitative analysis is uncommon because it is difficult to obtain reliable historical occurrence rate data for many risks.
Risk Matrix
Presents Likelihood and Impact in a two-dimensional matrix; the intersection determines the risk level, assisting in prioritizing treatment. It is a qualitative tool; the output is a level label and does not involve financial values.
| Likelihood ↓ / Impact → | Low | Medium | High |
|---|---|---|---|
| High | Medium | High | High |
| Medium | Low | Medium | High |
| Low | Low | Low | Medium |
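The matrix above can be expressed as a simple lookup table; this sketch mirrors the level labels in the table exactly:

```python
# Qualitative risk matrix lookup: (likelihood, impact) -> risk level,
# mirroring the 3x3 table above.
MATRIX = {
    ("High", "Low"): "Medium",   ("High", "Medium"): "High",   ("High", "High"): "High",
    ("Medium", "Low"): "Low",    ("Medium", "Medium"): "Medium", ("Medium", "High"): "High",
    ("Low", "Low"): "Low",       ("Low", "Medium"): "Low",     ("Low", "High"): "Medium",
}

print(MATRIX[("Medium", "High")])  # High
```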
Delphi Expert Assessment Method
A structured expert consensus method used for risk assessment scenarios where historical data is lacking and one must rely on expert judgment.
- Send anonymous questionnaires to experts to collect individual opinions.
- Summarize and provide feedback to all experts.
- Experts revise their own judgments based on group feedback.
- Repeat iterations until opinions converge to a consensus.
The anonymous design aims to eliminate bandwagon effects and authority influence, ensuring each expert's judgment is independent.
Score Matrix Method
A semi-quantitative analysis tool that maps qualitative levels to organization-defined values and then multiplies them to produce a risk priority score. A common example is High=5, Medium=3, Low=1, but both the values and the number of levels can be adjusted according to needs; the scale must remain consistent within the same assessment.
Risk Priority = Likelihood Score × Impact Score; the result is only for relative sorting and does not represent the actual loss amount.
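The scoring and sorting above can be sketched in a few lines; the risk names and ratings here are hypothetical examples, and the scale (High=5, Medium=3, Low=1) follows the text:

```python
# Semi-quantitative score matrix: map qualitative levels to scores,
# multiply likelihood by impact, and sort risks by priority.
SCORE = {"High": 5, "Medium": 3, "Low": 1}

# Hypothetical risk register entries: (risk, likelihood, impact)
risks = [
    ("Ransomware on file server", "Medium", "High"),
    ("Laptop theft", "High", "Low"),
    ("Data center flood", "Low", "High"),
]

prioritized = sorted(
    ((name, SCORE[l] * SCORE[i]) for name, l, i in risks),
    key=lambda r: r[1],
    reverse=True,
)
for name, score in prioritized:
    print(f"{score:>2}  {name}")
```

The resulting scores are only for relative ordering within one assessment; they carry no monetary meaning.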
Risk Matrix vs Score Matrix
Both structures are the same (Likelihood × Impact), the difference is in the output format:
- Risk Matrix (Qualitative): Outputs level labels (High/Medium/Low), used for quick classification
- Score Matrix (Semi-quantitative): Outputs numerical scores, used for sorting priorities across projects
Risk Quantification Formulas
| Term | Description |
|---|---|
| ALE (Annualized Loss Expectancy) | Average expected loss amount from a specific threat within one year; ALE = SLE × ARO |
| ARO (Annualized Rate of Occurrence) | Expected number of times a threat event occurs within one year (e.g., 0.1 = once every 10 years) |
| SLE (Single Loss Expectancy) | Expected loss amount each time a threat event occurs; SLE = AV × EF |
| AV (Asset Value) | Monetary value of the asset |
| EF (Exposure Factor) | Percentage of asset loss when a threat occurs (0–100%) |
Formula Connection
Example: Server value 1 million (AV), estimated loss of 60% after being encrypted by ransomware (EF = 0.6), estimated occurrence once every 5 years (ARO = 0.2). → SLE = 1 million × 0.6 = 600,000 → ALE = 0.2 × 600,000 = 120,000/year
If the annual cost of protective measures is less than 120,000, it is cost-effective.
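The SLE → ALE chain in this example can be reproduced directly; the values are taken from the worked example above:

```python
# Quantitative risk formulas: SLE = AV x EF, ALE = SLE x ARO
def sle(asset_value: float, exposure_factor: float) -> float:
    """Single Loss Expectancy: expected loss per occurrence."""
    return asset_value * exposure_factor

def ale(single_loss: float, annual_rate: float) -> float:
    """Annualized Loss Expectancy: expected loss per year."""
    return single_loss * annual_rate

# Worked example from the text: AV = 1,000,000, EF = 0.6, ARO = 0.2
loss_per_event = sle(1_000_000, 0.6)    # 600,000
annual_loss = ale(loss_per_event, 0.2)  # 120,000 per year
print(loss_per_event, annual_loss)
```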
Monte Carlo Simulation and ALE
The traditional ALE formula assumes ARO and EF are fixed values, but in reality, these parameters have uncertainty ranges (e.g., "occurs every 3–7 years"). Monte Carlo simulation performs massive random sampling, taking possible values for each variable, executing calculations tens of thousands of times, and producing a probability distribution of ALE rather than a single number. It is the standard method for high-level Quantitative Risk Analysis (QRA).
For example, outputting: "There is a 90% probability that the annualized loss will not exceed 500,000," which is more valuable for decision-making than "expected loss of 200,000."
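A minimal Monte Carlo sketch of the same ransomware scenario: instead of fixed EF and ARO, each trial samples from a range. The triangular distributions and their bounds here are illustrative assumptions, not part of any standard:

```python
import random

random.seed(42)  # fixed seed for reproducibility

AV = 1_000_000  # asset value (fixed, from the ALE example above)

def simulate_ale(trials: int = 100_000) -> list[float]:
    """Monte Carlo ALE: sample uncertain EF and ARO instead of point values."""
    results = []
    for _ in range(trials):
        ef = random.triangular(0.4, 0.8, 0.6)   # exposure factor: 40%-80%, mode 60%
        aro = 1 / random.triangular(3, 7, 5)    # "occurs once every 3-7 years"
        results.append(AV * ef * aro)
    return sorted(results)

ales = simulate_ale()
p90 = ales[int(len(ales) * 0.90)]  # 90th percentile of annualized loss
print(f"90% of trials: annualized loss <= {p90:,.0f}")
```

The output is a distribution, so instead of one number the analyst can report statements like "90% probability the annualized loss stays below X."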
ROSI (Return on Security Investment)
ROSI measures the financial rationality of security control measures: How much loss can the investment save?
| Term | Description |
|---|---|
| ALE_before | Annualized Loss Expectancy before implementing control measures |
| ALE_after | Annualized Loss Expectancy after implementing control measures (after risk reduction) |
| Annual Cost of Control Measure | Total cost of ownership of the security measure per year (license fees + labor + maintenance) |
Example: Antivirus software annual fee 20,000, expected to reduce ALE from 150,000 to 30,000.
- Loss saved: 150,000 − 30,000 = 120,000
- Net benefit: 120,000 − 20,000 = 100,000
- ROSI = 100,000 / 20,000 × 100% = 500% (every 1 unit invested saves 5 units of loss)
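The ROSI calculation in this example, expressed as a small helper:

```python
# ROSI = (ALE_before - ALE_after - annual cost) / annual cost
def rosi(ale_before: float, ale_after: float, annual_cost: float) -> float:
    """Return on Security Investment as a ratio (5.0 = 500%)."""
    net_benefit = (ale_before - ale_after) - annual_cost
    return net_benefit / annual_cost

# Worked example from the text: antivirus costs 20,000/year,
# reduces ALE from 150,000 to 30,000.
print(f"ROSI = {rosi(150_000, 30_000, 20_000):.0%}")  # ROSI = 500%
```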
FAIR Model (Factor Analysis of Information Risk)
FAIR is an industry-mainstream quantitative risk analysis framework, an open standard maintained by The Open Group, which decomposes risk into a quantifiable factor tree structure, ultimately producing a probability distribution of loss.
| Term | Description |
|---|---|
| Loss Event Frequency (LEF) | Expected number of times a threat successfully causes loss within a certain time |
| Threat Event Frequency (TEF) | Frequency of threat attempts (regardless of success or failure) |
| Vulnerability | Probability that a threat event converts into a loss event (the stronger the control measure, the lower this value) |
| Loss Magnitude | Impact scale of a single loss event, decomposed into Primary Loss and Secondary Loss (e.g., reputation damage, legal litigation) |
FAIR vs Traditional ALE
- The ALE formula is a "point estimate"; FAIR produces a conclusion of "there is an X% probability that the loss will not exceed Y amount" through factor decomposition and probability distribution, resulting in higher decision quality.
- FAIR and ISO 27005/NIST CSF can be used complementarily: ISO 27005 defines the risk management process, FAIR provides quantitative analysis methods.
- Adoption threshold: Requires collecting fine-grained threat and control measure data, suitable for organizations with higher security maturity.
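The factor tree can be illustrated with a point-estimate sketch. All values below are hypothetical, and this deliberately simplifies FAIR: a real FAIR analysis feeds ranges for each factor into a Monte Carlo run to produce a loss distribution, not single numbers:

```python
# Point-estimate sketch of the FAIR factor tree (hypothetical values).
tef = 12              # Threat Event Frequency: attempts per year
vulnerability = 0.25  # probability an attempt becomes a loss event

lef = tef * vulnerability  # Loss Event Frequency: 3 loss events/year

primary_loss = 50_000      # direct response and replacement costs
secondary_loss = 20_000    # e.g., reputation damage, legal exposure
loss_magnitude = primary_loss + secondary_loss

expected_annual_loss = lef * loss_magnitude
print(expected_annual_loss)  # 210000.0
```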
Risk Treatment Strategy Comparison Table
| Strategy (ISO 27005) | English | Definition | Example | Applicable Scenario |
|---|---|---|---|---|
| Risk Avoidance | Risk Avoidance | Abandon activities or assets that may trigger risk | Stop using insecure legacy protocols, abandon high-risk markets | Risk is too high and cannot be effectively reduced |
| Risk Modification (Reduction) | Risk Modification | Implement control measures to reduce risk likelihood or impact | Deploy firewall, implement MFA, encrypt transmission | Risk is within acceptable range, and control measure cost is reasonable |
| Risk Sharing | Risk Sharing | Share part of the consequences of risk with a third party, usually part of financial impact or contractual liability | Purchase cyber insurance, outsource hosting (MSP/MSSP), SLA agreed compensation mechanism | Risk impact is large, but impact can be dispersed through contracts or insurance |
| Risk Retention (Acceptance) | Risk Retention | Acknowledge risk exists, take no additional measures | Residual risk is below risk appetite, patching cost far exceeds potential loss | Risk is within tolerable range, or control cost is not cost-effective |
ISO 27005 Risk Treatment Process
- Risk Assessment: Identification → Analysis → Evaluation.
- Based on assessment results, select a treatment strategy (Avoidance/Modification/Sharing/Retention) for each risk.
- Formulate a Risk Treatment Plan, recording the selected strategy and implementation schedule.
- Residual Risk must be formally approved for acceptance by management.
Risk Terminology Supplementary Comparison Table
| Term | English | Definition |
|---|---|---|
| Inherent Risk | Inherent Risk | The original risk level before implementing any control measures, reflecting the natural exposure of assets to threats |
| Residual Risk | Residual Risk | The remaining risk after implementing risk treatment measures, must be formally accepted by management |
| Secondary Risk | Secondary Risk | New risks triggered by the risk response measures themselves (e.g., introducing monitoring systems to reduce security incident risk, but simultaneously creating employee privacy concerns) |
| Risk Transfer | Risk Transfer | Transferring specific parts of risk consequences to third parties through insurance, contracts, or compensation clauses |
| Risk Acceptance | Risk Acceptance | Management is aware of and formally accepts the risk, takes no further treatment measures, applicable when residual risk is within risk appetite |
| Risk Capacity | Risk Capacity | The maximum loss limit an organization can bear without jeopardizing survival, an objective financial/operational boundary (different from risk appetite: the latter is subjective willingness, the former is objective capability) |
| Risk Appetite | Risk Appetite | The upper limit of risk an organization strategically chooses to actively bear, a subjective willingness decision, must be less than or equal to risk capacity |
| Risk Threshold | Risk Threshold | The trigger level for a specific risk, requiring immediate response measures when exceeded, usually lower than risk appetite |
Risk Sharing vs Risk Transfer
| Comparison | Risk Sharing | Risk Transfer |
|---|---|---|
| ISO 27005 Term | ✅ Official term | Common colloquialism, not an official ISO 27005 term |
| Definition | Share risk consequences with a third party, usually part of financial loss, service level breach liability, or operational impact | Transfer specific risk consequences to a third party through insurance or contracts, usually emphasizing the transfer of "financial burden" |
| Substantive Difference | Original risk and governance responsibility remain with the organization, only shared by a third party | Semantically stronger, but in practice usually only transfers part of the financial consequences, legal liability does not disappear |
| Example | Outsourcing: Service interruption loss shared by both parties according to SLA or compensation cap | Purchase cyber insurance: Insurance company absorbs part of the claim amount |
The two are often used interchangeably in practice; ISO/IEC 27005 officially adopts Risk Sharing, while Risk Transfer is the more common colloquialism. Whichever term is used, insurance or outsourcing contracts can only transfer part of the financial consequences; compliance obligations and legal liabilities under GDPR, the Personal Data Protection Act, etc., do not disappear as a result.
Risk Retention vs Risk Acceptance
| Comparison | Risk Retention | Risk Acceptance |
|---|---|---|
| Nature | Risk treatment strategy (one of four) | Formal approval action by management |
| Source | ISO 27005 treatment option list; ISO 27001 body does not list strategy names | ISO 27001 Clause 6.1.3e directly requires; ISO 27005 explains process details |
| Timing | After assessment, choose not to take additional control measures | After any treatment strategy is executed, formal sign-off on residual risk |
| Relationship | Can stand alone | Must exist after every treatment strategy is executed |
The two can exist simultaneously: Choose "Risk Retention" (no treatment) → Residual risk equals inherent risk → Management executes "Risk Acceptance" for formal approval.
Hierarchical Relationship of Risk Appetite, Risk Capacity, and Risk Threshold
- Risk Capacity: Objective upper limit, exceeding this line will jeopardize organizational survival (e.g., total loss of net assets).
- Risk Appetite: Subjective willingness, how much risk the organization strategically chooses to bear, must be ≤ Risk Capacity.
- Risk Threshold: Trigger level for specific risks, requiring immediate response when exceeded, usually lower than Risk Appetite.
Hierarchy from high to low: Risk Capacity ≥ Risk Appetite ≥ Risk Threshold.
Relationship between Inherent Risk and Residual Risk
- No control measure can reduce risk to zero; residual risk must be formally accepted by management.
- Secondary risk is a link often overlooked in risk management plans: when evaluating response measures, it is necessary to identify whether new risks are introduced simultaneously.
Shadow IT
Definition: Software, cloud services, or hardware devices adopted by employees or teams without approval from the IT department. With the popularity of SaaS tools and the rise of AI services, the scope of Shadow IT has expanded significantly.
Common Types:
| Type | Description | Typical Example |
|---|---|---|
| Shadow SaaS | Employees privately use unapproved SaaS tools | Personal Google Drive, Dropbox, Notion |
| Shadow Cloud | Engineers set up cloud resources on their own using personal or credit card accounts, bypassing IT procurement processes | Self-built AWS account, Azure subscription |
| Shadow AI | Employees input company data into unapproved AI tools | Pasting customer data or source code into ChatGPT, using unvetted AI Coding Assistants |
| Shadow Data | Data is copied to unmanaged storage locations, forming untracked data copies | Exporting databases to personal NAS, forwarding email attachments to personal mailboxes |
| Shadow Hardware | Physical devices not registered by IT connected to the enterprise network | Personal NAS, Raspberry Pi, undeclared routers |
Common Risks:
| Risk | Description |
|---|---|
| Data Breach | Sensitive data enters unmanaged services; supplier terms may allow using data to train models (Shadow AI is particularly evident) |
| Compliance Violation | Data storage locations may violate data sovereignty or GDPR regulations |
| Lack of Patch Management | Software not included in enterprise patching processes may have known vulnerabilities |
| Audit Blind Spot | Unable to track data flow and usage records, making it difficult to trace sources after an incident |
- Countermeasures: CASB (Cloud Access Security Broker) to detect unauthorized cloud services, SaaS management platforms, regular cloud usage inventory; provide approved alternatives to reduce employee motivation to bypass controls (employees often use Shadow IT because official tools are inefficient).
- Intersection of Shadow IT and BYOD: Employees install unapproved work applications on personal devices, touching on both issues simultaneously.
Incident Management
Information Security Incident Classification
| Classification | English | Description | Example | Handling Priority |
|---|---|---|---|---|
| Security Event | Security Event | State changes identified in systems or networks | Firewall logs denied connection | Low (log only) |
| Security Alert | Security Alert | Notifications that may indicate a security event | IDS detects abnormal traffic | Medium (needs analysis) |
| Security Incident | Security Incident | Events confirmed to violate security policies or threats | Malware infection, data breach | High (immediate response) |
| Major Security Incident | Major Incident | Incidents causing significant impact on business operations | Ransomware locking core systems | Highest (crisis management) |
Containment Relationship of the Four
All incidents are events, but not all events are incidents. The four have a nested containment relationship, not a parallel classification:
Event ⊃ Alert ⊃ Incident ⊃ Major Incident
- Event: Broadest scope; any observable system state change is an event, including massive amounts of normal logs.
- Alert: Notifications filtered from events that require attention; still may be a False Positive.
- Incident: Confirmed violation of security policy after analysis, upgraded to an incident, requiring formal response.
- Major Incident: A subset of incidents that impact business operations, requiring the activation of crisis management processes.
Information Security Incident Severity Classification Table (Levels 1 to 4)
The higher the number, the more severe the security incident.
| Item / Severity Level | Level 1 | Level 2 | Level 3 | Level 4 |
|---|---|---|---|---|
| Judgment Criteria (Core Logic) | Non-core system interruption, no PII or confidential data leakage | Non-core system paralysis, or involves general PII leakage | Core system paralysis, data tampered with, or sensitive PII leakage | National security threatened, critical infrastructure large-scale shutdown |
| Notification Time Limit | Within 1 hour | Within 1 hour | Within 1 hour | Within 1 hour |
| Damage Control or Recovery Operations | Within 72 hours | Within 72 hours | Within 36 hours | Within 36 hours |
| Deadline for Submission of Investigation, Handling, and Improvement Report | Within 1 month | Within 1 month | Within 1 month | Within 1 month |
| Typical Example | Official website defacement, general office computer virus infection | Core business briefly interrupted, general PII leakage | Medical record leakage, official document leakage, core system collapse | Power grid/water conservancy paralysis, military or diplomatic secret leakage |
Taiwan Regulatory Time Limit Supplement
According to the current "Regulations on Security Incident Notification, Response, and Drills", all levels of incidents should continue to be investigated and handled after completing damage control or recovery, and an investigation, handling, and improvement report must be submitted within 1 month.
Security Incident Response (NIST SP 800-61)
NIST SP 800-61 Rev. 3 (April 2025) positions incident response as a component of CSF 2.0 risk management rather than a standalone "incident handling manual." IR activities span the six Functions of CSF 2.0: Govern and Identify form the governance foundation, Protect covers preparation and defense, and Detect, Respond, and Recover cover the actual incident handling. Continuous improvement spans the entire IR lifecycle instead of occurring only after the fact.
| CSF 2.0 Function | Core Task |
|---|---|
| Govern | Establish IR policy, role division, authorization, and resource allocation |
| Identify | Identify critical assets, assess threat scenarios, establish IR trigger conditions |
| Protect | Deploy detection tools, establish CSIRT, formulate response plans and drills |
| Detect | Identify incidents through SIEM, IDS, and log monitoring, determine severity and impact scope |
| Respond | Contain affected systems, remove malicious programs and vulnerabilities, execute communication and coordination |
| Recover | Restore systems and verify normal operation, write incident reports, update response plans |
Rev. 2 (2012) Four-Stage Lifecycle
Rev. 2 (withdrawn in April 2025) defined the incident handling lifecycle in four stages:
| Stage | Name | Core Task |
|---|---|---|
| 1 | Preparation | Establish CSIRT, formulate response plans, deploy detection tools, conduct education, training, and drills |
| 2 | Detection & Analysis | Identify incidents through SIEM, IDS, and log monitoring, determine incident severity and impact scope |
| 3 | Containment, Eradication & Recovery | Isolate affected systems (containment) → remove malicious programs and vulnerabilities (eradication) → restore systems and verify normal operation (recovery) |
| 4 | Post-Incident Activity | Write incident reports, review improvement measures, update response plans, preserve evidence (for legal or audit purposes) |
💡 Terminology Quick Check
- CSIRT: Computer Security Incident Response Team
- SIEM: Security Information and Event Management
- IDS: Intrusion Detection System
Incident Handling Priority
- Containment is the first priority: Isolate infected systems first to prevent disaster expansion, then proceed with eradication and recovery.
- "Lessons Learned" in post-incident activities is the key to continuous improvement; every incident should produce improvement suggestions fed back to the preparation stage.
Differences between CSIRT, CERT, and SOC
| Organization | Full Name | Positioning | Scope of Responsibility |
|---|---|---|---|
| CSIRT | Computer Security Incident Response Team | Incident response team | Internal security incident detection, analysis, containment, and recovery for the organization; can be permanent or ad-hoc |
| CERT | Computer Emergency Response Team | Emergency response center | National or regional level, provides cross-organizational security incident coordination and early warning (e.g., TWCERT/CC) |
| SOC | Security Operations Center | Security Operations Center | 24/7 continuous monitoring, handles daily alerts and incident classification; CSIRT handles incidents escalated by SOC |
- CERT often acts as a national-level coordination center (e.g., Taiwan's TWCERT/CC, US-CERT), while CSIRT tends to be an internal team for an organization.
- SOC focuses on daily monitoring and initial alert screening, while CSIRT focuses on in-depth incident investigation and response. The two often work together: SOC detection → escalation to CSIRT for handling.
MTTD / MTTA / MTTR Incident Response Indicators
| Indicator | Full Name | Definition | Calculation Method |
|---|---|---|---|
| MTTD | Mean Time To Detect | Average time from when a threat actually occurs to when the organization detects the incident | Detection Time − Incident Occurrence Time |
| MTTA | Mean Time To Acknowledge | Average time from when an alert is triggered to when an analyst confirms taking over | Takeover Time − Alert Trigger Time |
| MTTR | Mean Time To Respond | Average time from when an incident is detected to when the response (containment/eradication/recovery) is completed | Response Completion Time − Detection Time |
Timeline order: Incident Occurrence → MTTD → Detection Confirmation/Alert Trigger → MTTA → Analyst Takeover → MTTR → Resolution
- Shortening MTTD is more critical than shortening MTTR: The longer an attacker lurks inside the organization (Dwell Time), the greater the scope of lateral movement and data theft.
- Industry reference: According to reports, the global average Dwell Time has shortened from hundreds of days to about 10 days, but there is still room for improvement.
- Practices to improve MTTD: SIEM correlation analysis, EDR (Endpoint Detection and Response), Threat Hunting.
- Definition in these notes: MTTR start point is detection time, so MTTR covers MTTA (MTTA is a sub-interval of MTTR).
- Another common definition: Some organizations define the MTTR start point as the analyst takeover time, in which case MTTA and MTTR are non-overlapping continuous intervals, Total Response Time = MTTA + MTTR.
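Given a set of incident timestamps, the three indicators can be computed as below, using these notes' definition in which MTTR starts at detection time. The incident records are hypothetical:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: occurrence, detection, takeover, resolution
incidents = [
    {
        "occurred": datetime(2024, 5, 1, 2, 0),
        "detected": datetime(2024, 5, 1, 9, 0),
        "acknowledged": datetime(2024, 5, 1, 9, 30),
        "resolved": datetime(2024, 5, 1, 15, 0),
    },
    {
        "occurred": datetime(2024, 6, 3, 10, 0),
        "detected": datetime(2024, 6, 3, 13, 0),
        "acknowledged": datetime(2024, 6, 3, 13, 15),
        "resolved": datetime(2024, 6, 3, 18, 0),
    },
]

def hours(delta):
    return delta.total_seconds() / 3600

# MTTD: occurrence -> detection; MTTA: alert trigger -> analyst takeover;
# MTTR (as defined in these notes): detection -> resolution.
mttd = mean(hours(i["detected"] - i["occurred"]) for i in incidents)
mtta = mean(hours(i["acknowledged"] - i["detected"]) for i in incidents)
mttr = mean(hours(i["resolved"] - i["detected"]) for i in incidents)
print(f"MTTD={mttd:.1f}h MTTA={mtta:.2f}h MTTR={mttr:.1f}h")
```

Under the alternative definition mentioned above, `mttr` would instead be measured from `acknowledged`, and total response time would be MTTA + MTTR.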
Post-Incident Report
| Element | Description |
|---|---|
| Incident Timeline | Complete record of every time node from detection, notification, containment to recovery |
| Root Cause Analysis | Trace the root cause of the incident, rather than just describing surface symptoms |
| Impact Scope | Affected systems, number of data records, business interruption time |
| Corrective Actions | Patching actions taken (e.g., patching vulnerabilities, blocking accounts, updating rules) |
| Prevention Recommendations | Long-term improvement measures to prevent recurrence of similar incidents |
Cross-Platform Log Management and Forensic Path
| Aspect | Windows | Linux |
|---|---|---|
| System Logs | Event Viewer: Application, Security, System channels | /var/log/syslog (Debian family) or /var/log/messages (RHEL family); systemd uses journalctl |
| Security/Auth Logs | Security event records (Login Success 4624, Login Failure 4625, Account Creation 4720) | /var/log/auth.log (Debian) or /var/log/secure (RHEL) |
| Web Server Logs | IIS: C:\inetpub\logs\LogFiles\ | Apache: /var/log/apache2/; Nginx: /var/log/nginx/ |
| Firewall Logs | Windows Firewall: Event Viewer, "Windows Firewall With Advanced Security" channel | iptables: /var/log/kern.log; nftables: journalctl -k |
| Centralized Management | Windows Event Forwarding (WEF) | rsyslog / syslog-ng remote forwarding to SIEM |
| Log Retention | GPO sets Event Log size and overwrite policy | logrotate sets rotation and retention days |
| Key Events | Login Failure (4625), Permission Change (4672), Object Access (4663) | /var/log/auth.log, auditd rules |
| Anti-Tampering | Forward to SIEM and set to read-only | Remote Syslog + chattr +a (append-only) |
Windows / Linux Log Query Command Examples
```powershell
# Windows: Query login failure events (4625) from the last day
Get-WinEvent -FilterHashtable @{
    LogName   = 'Security'
    Id        = 4625
    StartTime = (Get-Date).AddDays(-1)
} -MaxEvents 20

# Windows: Query Security log directly using wevtutil, newest events first
wevtutil qe Security /c:20 /rd:true /f:text
```

```bash
# Linux: View systemd journal from the last hour
sudo journalctl --since "1 hour ago"

# Linux: View logs from the previous boot cycle
sudo journalctl -b -1

# Linux: Set file to append-only to reduce risk of log overwrite
sudo chattr +a /var/log/auth.log
```

- `Get-WinEvent -FilterHashtable` is suitable for precise event filtering by `LogName`, `Id`, and `StartTime`.
- `wevtutil qe` is suitable for quick Event Log queries or exports.
- `journalctl -b -1` is commonly used to check whether anomalies already appeared "before this boot."
- `chattr +a` is a common practice on ext file systems, meaning the file can only be appended to, not overwritten or deleted.
Log Management Key Comparison Table
| Aspect | Key Description |
|---|---|
| Log Purpose | Record user activities and abnormal events as post-incident tracking and legal evidence; protect Non-repudiation |
| Syslog Protocol | Defaults to UDP port 514 (unreliable, packets may be lost during high traffic); switch to TCP to ensure reliable delivery; use TLS (Syslog over TLS) for encrypted transmission |
| Centralized Management | Import logs from multiple devices/systems into SIEM for unified query and analysis |
| Normalization | Log formats (field names, time formats) from different devices are inconsistent; must unify formats before centralized analysis |
| Time Synchronization (NTP) | Device clocks must be synchronized with a standard time source to ensure correct timing of cross-device logs, the foundation for audit and judicial evidence |
| Protection and Retention | Logs must not be arbitrarily modified by administrators; retention periods should meet regulatory or policy requirements |
Syslog Severity Levels
| Level | Value | Keyword | Description |
|---|---|---|---|
| Emergency | 0 | EMERG | System unusable (e.g., kernel panic) |
| Alert | 1 | ALERT | Action must be taken immediately (e.g., primary database unresponsive) |
| Critical | 2 | CRIT | Critical conditions (e.g., hardware failure) |
| Error | 3 | ERR | Error conditions, needs investigation |
| Warning | 4 | WARNING | Warning conditions, not errors but noteworthy anomalies |
| Notice | 5 | NOTICE | Normal but significant conditions |
| Informational | 6 | INFO | Informational messages |
| Debug | 7 | DEBUG | Debug-level messages, usually disabled in production |
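The severity value combines with a facility code into the PRI number at the start of each Syslog message (PRI = facility × 8 + severity). A quick shell check, using facility 4 (auth) and severity 4 (warning) as an example:

```shell
# Syslog PRI value per RFC 5424: facility * 8 + severity
facility=4    # auth
severity=4    # warning
pri=$(( facility * 8 + severity ))
echo "<$pri>"    # prints <36>
```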
Common Log Formats
| Format | Description |
|---|---|
| Syslog (RFC 5424) | Linux/Unix standard log protocol, includes Priority, Timestamp, Hostname, App-Name, Message structure |
| JSON | Structured logs, convenient for parsing and querying by tools like ELK (Elasticsearch + Logstash + Kibana) |
| CEF (Common Event Format) | Defined by HP ArcSight, a semi-structured format widely supported by SIEM, fixed fields, easy cross-system integration |
| LEEF (Log Event Extended Format) | Structured format used by IBM QRadar, similar to CEF but field definitions differ slightly |
| W3C Extended Log Format | Web server log standard, default for IIS |
Easily Confused Concepts
- Retaining logs protects Non-repudiation, not confidentiality or availability.
- Syslog defaults to UDP, so packets may be lost under high traffic; UDP has no retransmission, so this is a design characteristic.
- Log formats differ across devices and require Normalization; do not confuse this with De-identification.
Log Management Best Practices
- Should record: Login success/failure, permission changes, data access, system errors, management operations.
- Should not record: Passwords (including hash values), credit card numbers, PII (ID numbers, medical records), to avoid logs themselves becoming a source of data leakage.
- Integrity protection: Logs should be Write Once Read Many (WORM) after being sent to prevent attackers from tampering with logs to cover intrusion traces.
- Centralization: Centralize logs to SIEM or log platforms; logs scattered across hosts are difficult to correlate and analyze.
- Retention period: Determined by regulations and policies (e.g., PCI DSS requires retention for at least one year, with the last three months immediately accessible).
- Alert settings: Set real-time alerts for high-risk events (multiple login failures, privileged account operations, access outside business hours).
Digital Forensics: Order of Volatility
When collecting digital evidence during incident response, evidence should be collected from the most easily lost (most volatile) to the most stable. This principle originates from RFC 3227.
| Order | Data Source | Volatility |
|---|---|---|
| 1 | CPU registers, CPU cache | Highest (lost when power is off) |
| 2 | Main memory (RAM) | Extremely high (lost when powered off) |
| 3 | Running processes, network connection status, routing tables | High |
| 4 | Temporary files, paging file (Paging File / Swap) | Medium-High |
| 5 | Hard disk data | Medium (non-volatile, but may be overwritten by malware) |
| 6 | Remote logs, SIEM data | Low |
| 7 | Archival media (tape, backup discs) | Lowest |
- Live Forensics: Capture memory image (Memory Dump) without shutting down, then perform disk image (Disk Image).
- Any operation on the system (e.g., executing tool programs) may change RAM content; assess carefully before evidence collection.
- Write protection: Avoid any writes to the target media during evidence collection. Common practices:
- Hardware Write Blocker: External device, intercepts all write commands at the hardware level, connected between the target disk and the forensic machine; preferred choice.
- Forensic Boot Environment: Boot from USB into Kali Forensic mode or WinPE forensic environment, do not mount local disks to avoid OS contamination.
- Image to External Media: Use `dd`, FTK Imager, or `dcfldd` to read the source and write the image to external media; the original disk is only read.
Disk Imaging Tool Comparison
| Tool | Platform | Characteristics |
|---|---|---|
| dd | Linux / macOS | Built-in, bit-for-bit copy, no progress display, no built-in hash, syntax errors may overwrite source |
| dcfldd | Linux | Forensic-enhanced version of dd (developed by US DoD Computer Forensics Laboratory), supports calculating hash while copying, progress display, synchronous writing to multiple targets |
| FTK Imager | Windows (GUI) | From AccessData, can output E01 (Expert Witness) or AD1 format, built-in hash verification, supports preview while copying |
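The image-then-verify workflow these tools share can be sketched with plain `dd` and `sha256sum` on a small test file (in real evidence collection the input would be a write-blocked device such as `/dev/sdb`, and `dcfldd`'s `hash=sha256` option computes the hash during the copy):

```shell
# Demo of bit-for-bit imaging plus hash verification; a temp file stands in
# for the write-blocked source device, which is only ever read.
dd if=/dev/zero of=/tmp/source.bin bs=1M count=4 2>/dev/null
dd if=/tmp/source.bin of=/tmp/image.bin bs=1M 2>/dev/null
src=$(sha256sum /tmp/source.bin | awk '{print $1}')
img=$(sha256sum /tmp/image.bin | awk '{print $1}')
[ "$src" = "$img" ] && echo "hashes match"    # prints "hashes match"
```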
Storage Media Space Structure and Hidden Data
After obtaining a disk image, investigators will look for hidden or residual data in the following areas in addition to existing files:
| Area | Description | Forensic Significance |
|---|---|---|
| Slack Space | OS allocates storage in fixed-size clusters. If file size is not an integer multiple of the cluster, the space between the "actual file end" and "cluster end" is Slack Space | May contain residual data (content of the previous file that used the cluster), even if the original file is deleted or overwritten, fragments in Slack Space can still be read |
| Unallocated Space | Sectors in the file system not occupied by any existing files. After a file is deleted, the OS only marks the cluster as "available" and does not immediately clear the data | Data can still be restored by forensic tools (e.g., Autopsy, FTK) before being overwritten by new files, commonly used to recover malicious programs or logs deleted by attackers |
Windows Execution Trace Forensics (Registry and System Files)
Even if a program has been deleted, the following registry keys and NTFS system files retain execution or change records:
| Location | Description |
|---|---|
| Shimcache (AppCompatCache) | HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\AppCompatCache, records program paths and timestamps that have been executed on the system (updated after reboot) |
| Amcache.hve | C:\Windows\appcompat\Programs\Amcache.hve, records hashes, paths, and first execution time of installed or executed applications |
| Prefetch (.pf) | C:\Windows\Prefetch\, records DLLs loaded when a program is first executed, retains execution count and last 8 execution times |
| MFT (Master File Table) | The core metadata table of NTFS, records attributes of every file (size, timestamp, data location); even if the file is deleted, MFT records can still be read by forensic tools. Records the "current" state of the file, does not retain historical change sequences. |
| $UsnJrnl (Update Sequence Number Journal) | Records the sequence of all file and directory change events (creation, deletion, renaming, data changes, encryption, etc.) within the volume, each record has a USN sequence number and timestamp. Even if the target file has been deleted, log records still exist, making it the best source for tracking ransomware encryption behavior trajectories. |
| $LogFile (NTFS Transaction Log) | Records metadata changes that are about to be or have been completed to restore MFT consistency after a system crash. Content is in units of transactions, can be used in forensics to restore recent MFT changes, but retention time is short (circular overwrite). |
| $Bitmap | Records the allocation status of each cluster (0 = unused, 1 = allocated). Used in forensics to identify the scope of Unallocated Space, confirming which sectors may contain deleted data residuals. |
- Shimcache / Amcache forensic value: Can prove that a specific executable once existed on the system and was executed, even if the attacker has deleted the file afterwards.
- Prefetch is disabled by default on Windows Server (to improve I/O performance), and ransomware often disables Prefetch for anti-forensics.
Linux Digital Forensics Common Paths
/proc/[pid]/ is a virtual directory in Linux procfs that reflects the runtime state of a specific process. Common forensic paths are as follows:
| Path | Content | Forensic Use |
|---|---|---|
| `/proc/[pid]/maps` | Virtual memory layout of the process, including mapped file paths and dynamic link libraries (.so) | Confirm which shared libraries the process loaded; can reveal maliciously injected .so files |
| `/proc/[pid]/cmdline` | Command-line arguments used when starting the process | View process startup parameters (cannot be read once the process has exited) |
| `/proc/[pid]/environ` | Environment variables inherited by the process | Find hardcoded keys or settings |
| `/proc/[pid]/status` | Process status (PID, PPID, UID, GID, etc.) | Confirm the process's parent-child relationship and execution identity |
| `/proc/[pid]/fd/` | All file descriptors currently opened by the process | View files or network connections being read/written by the process |
| `~/.bash_history` | Shell command history, written on logout | Reconstruct the attacker's operation sequence; attackers often clear it with history -c or unset HISTFILE |
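These paths can be explored safely against the current shell itself, since `$$` expands to the shell's own PID:

```shell
# Inspect the current shell's own procfs entry ($$ = this shell's PID)
head -4 /proc/$$/status            # first few status fields (Name, state, PIDs)
tr '\0' ' ' < /proc/$$/cmdline; echo   # arguments are NUL-separated on disk
ls /proc/$$/fd                     # currently open file descriptors
```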
Chain of Custody
Records the complete custody history of digital evidence from collection to court, ensuring the integrity and legal validity of the evidence. Each transfer or access must be recorded:
| Item | Description |
|---|---|
| Who | Identity of the person obtaining, transferring, or accessing |
| When | Precise timestamp |
| Where | Location or storage position |
| What | Actions performed (e.g., creating disk image, transferring to forensic personnel) |
Once the chain of custody is broken (e.g., evidence was left unattended or access was not recorded), the court may not accept the evidence.
Hash Integrity Verification: When collecting evidence, immediately calculate a hash (usually SHA-256) for both the original media and the image, and record it in the chain of custody document. All subsequent forensic operations are performed on copies, and the hash is re-verified before each start. If the hash matches, it can prove to the court that the copy is bit-for-bit identical to the original media and has not been tampered with.
| Platform | Command |
|---|---|
| Linux | sha256sum disk.img |
| Windows | certutil -hashfile disk.img SHA256 |
Linux Log Search and Forensic Commands
# Use journalctl to search for SSH events in a specific time range
journalctl --since "2025-01-01 00:00:00" --until "2025-01-02 00:00:00" -u sshd
# Search for SSH brute force (login failure records)
grep "Failed password" /var/log/auth.log | tail -20
# Count SSH login failure source IPs (sorted by frequency)
grep "Failed password" /var/log/auth.log \
| awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head -10
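# Verify the $(NF-3) field position against a sample auth.log line
# (hypothetical sample; 203.0.113.5 is a documentation-range IP)
line='Jan  1 12:00:00 host sshd[123]: Failed password for root from 203.0.113.5 port 2222 ssh2'
echo "$line" | awk '{print $(NF-3)}'   # prints 203.0.113.5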
# Memory Dump — Using LiME kernel module
sudo insmod /opt/lime/lime-$(uname -r).ko "path=/evidence/mem.lime format=lime"
# Use dd to create bit-for-bit disk image
sudo dd if=/dev/sda of=/evidence/disk.img bs=4M status=progress
# Calculate image file hash (ensure evidence integrity, maintain Chain of Custody)
sha256sum /evidence/disk.img > /evidence/disk.img.sha256
Security Considerations for Log Recording
- Prohibit recording sensitive data: Passwords, credit card numbers, personal ID numbers, etc., must not appear in logs.
- Log integrity protection: Attackers often clear logs after intrusion to cover their tracks; logs should be forwarded in real-time to an independent SIEM or centralized log server.
- Time synchronization: All systems should synchronize clocks via NTP to ensure the timeline of cross-system logs can be correctly correlated.
- Log retention period: Determined by regulatory requirements (e.g., "Cyber Security Management Act" requires retention for at least 6 months) and business needs.
Memory Forensics
Memory (RAM) is the most volatile source of evidence, containing running processes, network connections, decrypted keys, and malicious code; this information is permanently lost after shutdown. Volatility is the most mainstream open-source memory forensics framework, capable of analyzing memory images (Memory Dump) from Windows / Linux / macOS.
Core Analysis Objectives:
| Analysis Object | Volatility Common Commands (v3) | Forensic Significance |
|---|---|---|
| Running processes | windows.pslist / windows.pstree | List all processes and parent-child relationships, find malicious programs disguised as system processes (e.g., fake svchost.exe) |
| Network connections | windows.netstat | List all TCP/UDP connections and corresponding processes, find abnormal C2 communication connections |
| Injected malicious code | windows.malfind | Scan memory segments with "executable + readable/writable (RWX)" protection attributes and no corresponding disk file |
| DLL list | windows.dlllist | List all DLLs loaded by the process, discover abnormally injected DLLs |
| Registry Hive | windows.hivelist | List memory addresses of loaded Registry Hives, can further dump SAM / SYSTEM and other Hives |
VAD (Virtual Address Descriptor) tree structure:
The Windows kernel records each process's memory-region layout and protection attributes (e.g., PAGE_EXECUTE_READWRITE) in a VAD tree. During code injection (Process Injection, Process Hollowing), the attacker typically allocates a memory region with RWX (read/write/execute) permissions and then writes shellcode into it. Such a region exists in the VAD tree but is backed by no file on disk, which is a conspicuous anomaly and the main detection basis for windows.malfind.
Common Methods for Obtaining Memory Images
| Operating System | Tool | Description |
|---|---|---|
| Windows | WinPmem, DumpIt, FTK Imager | Commercial or open-source tools, use kernel drivers to read physical memory and output in raw / lime format |
| Linux | LiME (Linux Memory Extractor) | Kernel module, loaded via insmod to dump memory to local or transmit over the network |
| Virtual Machine | Hypervisor Snapshot | VM snapshot directly contains memory state, easiest to obtain and cleanest during forensics |
Business Continuity and Disaster Recovery
Backup Type Comparison Table
| Backup Type | English | Backup Scope | Backup Time | Restore Steps | Storage Space |
|---|---|---|---|---|---|
| Full Backup | Full Backup | All data, regardless of whether it changed since the last backup | Longest | 1 (Full is enough) | Largest |
| Differential Backup | Differential Backup | All data changed since the last Full Backup | Increasing (larger later) | 2 (Latest Full + Latest Diff) | Medium |
| Incremental Backup | Incremental Backup | Data added or changed since the most recent backup of any type | Shortest | Multiple (Latest Full + all subsequent Inc) | Smallest |
Backup Strategy Trade-offs
- Full Backup = Slowest backup, simplest restore.
- Differential Backup = Space is larger than incremental, but restore only needs two copies (Full + Latest Diff).
- Incremental Backup = Fastest backup each time, but restore requires chaining multiple copies, the most complex process.
- Common strategy in practice: Full on Sunday, Incremental (or Differential) Monday through Saturday.
- RPO (Recovery Point Objective): The maximum time range of data loss allowed, determined directly by the backup cycle. The higher the backup frequency, the shorter the RPO; Full backup combined with hourly Transaction Log can compress RPO to within 1 hour.
- 3-2-1 Backup Rule: At least 3 copies, stored on 2 different media, 1 copy off-site. This is the industry-recognized minimum standard.
- WORM (Write Once Read Many) Storage: Data cannot be modified or deleted within the specified retention period after being written, effectively resisting ransomware encryption or attackers deleting backups. Cloud services (e.g., Azure Immutable Blob Storage) support WORM mode, an important part of modern backup strategies.
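The Full-plus-Incrementals restore chain above can be sketched with GNU tar's `--listed-incremental` (`-g`) snapshot file, which makes each subsequent run capture only what changed since the previous one (paths below are hypothetical):

```shell
# GNU tar incremental backup driven by a snapshot (listed-incremental) file
rm -rf /tmp/data /tmp/snap
mkdir -p /tmp/data && echo "a" > /tmp/data/a.txt
# First run: the snapshot file does not exist yet, so this is a full backup
tar -czf /tmp/full.tar.gz -g /tmp/snap -C /tmp data
# Later run: only files changed since the snapshot are captured
echo "b" > /tmp/data/b.txt
tar -czf /tmp/inc1.tar.gz -g /tmp/snap -C /tmp data
tar -tzf /tmp/inc1.tar.gz    # should list data/b.txt without re-saving a.txt
```

Restoring requires extracting the full archive first and then each incremental in order, which is exactly the multi-copy restore chain the table describes.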
SQL Server Backup Mapping (Industry practical example):
| General Backup Type | SQL Server Mapping | SQL Server Characteristics |
|---|---|---|
| Full Backup | Full Backup | Backs up the entire database, including data and part of transaction logs, can be restored independently |
| Differential Backup | Differential Backup | Backs up data pages (Data Extent) changed since the last Full, requires Full + Latest Diff for restore |
| Incremental Backup | Transaction Log Backup | Backs up transaction logs since the last log backup, requires Full + all subsequent Logs applied in order for restore |
- SQL Server's Transaction Log Backup concept is equivalent to incremental backup, but records "transactions" rather than "changed files."
- SQL Server also has "Filegroup Backup": Backs up only specific filegroups, suitable for ultra-large databases to avoid full database backups every time.
Cross-Platform Backup Tool Comparison
| Aspect | Windows | Linux |
|---|---|---|
| Built-in Backup Tool | wbadmin (Windows Server Backup), File History | rsync, tar, cp |
| Enterprise Solution | SQL Server Backup, Azure Backup | Bacula, BorgBackup, Restic |
| Snapshot | VSS (Volume Shadow Copy Service) | LVM Snapshot, Btrfs/ZFS Snapshot |
| Scheduling | Task Scheduler | cron / systemd timer |
| Off-site Backup | Azure Blob Storage, AWS S3 | rsync + SSH, rclone (supports multi-cloud) |
Linux Backup Command Examples
# rsync incremental backup (only transfers changed files, suitable for off-site backup)
rsync -avz --delete /var/www/ user@backup-server:/backup/www/
# tar full backup (compresses entire directory into a single file)
tar -czf /backup/full-$(date +%Y%m%d).tar.gz /var/www/
# Schedule daily incremental backup at 2 AM
# crontab -e
# 0 2 * * * rsync -avz --delete /var/www/ user@backup-server:/backup/www/ >> /var/log/backup.log 2>&1
# find to clean up backup files older than 30 days
find /backup/ -name "full-*.tar.gz" -mtime +30 -delete
Windows Backup Command Examples
# wbadmin execute full backup to network share
wbadmin start backup -backupTarget:\\BackupServer\Share -include:C: -quiet
# Use robocopy to mirror sync folder (/MIR will delete extra files in destination)
robocopy C:\Data \\BackupServer\Data /MIR /Z /LOG:C:\Logs\backup.log
# Create VSS snapshot (Volume Shadow Copy)
vssadmin create shadow /for=C:
RAID (Redundant Array of Independent Disks) Comparison Table
| RAID Level | Technical Principle | Min Disks | Allowed Faults | Read Performance | Write Performance | Available Space | Applicable Scenario |
|---|---|---|---|---|---|---|---|
| RAID 0 | Striping, no redundancy. Data is split and written alternately to each disk, all disk space is merged into a single logical disk | 2 | 0 (any disk failure = total data loss) | Highest (parallel read) | Highest | 100% (sum of all disk capacities) | Video editing scratch space, scenarios needing high performance but not fault tolerance |
| RAID 1 | Mirroring, data is completely copied to each disk | 2 | n-1 (as long as 1 remains) | High (dual-track read) | No improvement (must write two copies simultaneously) | 50% | OS disk, scenarios valuing data security |
| RAID 5 | Striping + Distributed Parity, parity is rotated across different disks | 3 | 1 (parity can rebuild one disk) | High | Decreased (needs Parity calculation) | (n-1)/n | NAS (Network Attached Storage), common choice balancing performance and fault tolerance |
RAID 5 Parity Distribution Illustration (3 disks)
| Stripe | Disk 0 | Disk 1 | Disk 2 |
|---|---|---|---|
| 1 | A1 | A2 | P(A) |
| 2 | B1 | P(B) | B2 |
| 3 | P(C) | C1 | C2 |
P(X) = Parity block for that stripe (calculated via XOR). Parity is rotated across different disks for each stripe to avoid parity becoming a bottleneck on a single disk. When any disk fails, the lost block can be calculated back using remaining data + parity (e.g., Disk 1 fails → A2 = A1 XOR P(A)).
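The XOR reconstruction above (A2 = A1 XOR P(A)) can be checked with shell arithmetic on two sample byte values:

```shell
# RAID 5 parity: P = A1 XOR A2; a lost A2 is recoverable as A1 XOR P
a1=$(( 0x5A )); a2=$(( 0x3C ))
p=$(( a1 ^ a2 ))          # parity block written to the third disk
rebuilt=$(( a1 ^ p ))     # "Disk 1 failed": recompute A2 from survivors
[ "$rebuilt" -eq "$a2" ] && echo "A2 recovered"    # prints "A2 recovered"
```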
Common RAID Misconceptions and Practical Configurations
- RAID is not backup: RAID only protects against hard disk failure and cannot prevent ransomware encryption, accidental deletion, fire, etc.
- Common industry combinations:
- RAID 10 (1+0) = RAID 1 Mirroring + RAID 0 Striping. Balances performance and security, a common choice for database servers (e.g., SQL Server recommends RAID 10 for data files).
- RAID 50 (5+0) = RAID 5 + RAID 0. Commonly used in large storage systems.
- Enterprise-grade storage usually comes with Hot Spare disks, which rebuild automatically upon failure.
RAID 5 Intuitive Analogy
Imagine 3 notebooks to copy a story:
- Notebook A writes paragraph 1, paragraph 2.
- Notebook B writes paragraph 1, paragraph 3.
- Notebook C writes paragraph 2, paragraph 3.
Each notebook rotates responsibility for writing a "summary page" (parity), recording the checksum of the other two. If any one is lost, the content can be restored from the content and summary page of the other two.
- RAID 6 (Dual Parity): Equivalent to writing two summary pages per stripe, distributed on two different disks, so it can tolerate 2 disk failures simultaneously. Requires at least 4 disks.
- RAID 5 writes only one parity, tolerating 1 failure; RAID 6 writes two parities, tolerating 2 failures, but write performance is lower (needs two Parity calculations).
Recovery Site Type Comparison Table (Hot / Warm / Cold)
| Type | English | Equipment Status | Data Timeliness | Activation Time | Cost |
|---|---|---|---|---|---|
| Hot Site | Hot Site | Complete equipment identical to main site, continuously operating | Real-time sync (or extremely short delay) | Within minutes (almost immediate switch) | Highest |
| Warm Site | Warm Site | Equipment ready but standby, requires manual startup and data sync | Periodic backup (hours to a day) | Hours to days | Medium |
| Cold Site | Cold Site | Only the site, power, and basic infrastructure are available; no equipment installed | Requires restore from backup media | Days to weeks | Lowest |
RTO / WRT / MTPD / RPO
- RTO (Recovery Time Objective): The maximum time service interruption is allowed after a system shutdown. Hot Site RTO is shortest; Cold Site RTO is longest.
- WRT (Work Recovery Time): The additional time required to verify data integrity, tune settings, and restore normal operations after the system is restored (RTO achieved). The actual interruption time felt by users = RTO + WRT.
- MTPD (Maximum Tolerable Period of Disruption): The absolute upper limit of interruption time the organization can bear. Exceeding this time will cause unacceptable business damage. MTPD ≥ RTO + WRT.
- RPO (Recovery Point Objective): The maximum time range of data loss allowed after a disaster, determined by the backup cycle. Hot Site (real-time sync) RPO is close to 0; Cold Site (periodic backup) RPO could be days ago.
- Risk Appetite drives goal setting: High risk appetite (can accept longer interruption) → Allows longer RTO/RPO → Can choose lower-cost backup solutions.
- RPO looks at the "past" (how much data can be lost), RTO + WRT looks at the "future" (how fast to recover), MTPD is the hard deadline.
- Industry Practice: Core systems in the financial industry usually require RTO < 4 hours, RPO < 1 hour, so they mostly adopt Hot Site + real-time data sync (e.g., SQL Server Always On Availability Group).
- General enterprise ERP systems can accept RTO < 24 hours, Warm Site combined with daily differential backup is sufficient.
Data Storage Tiering (Hot / Warm / Cold)
Allocate data to different cost storage tiers based on access frequency to achieve a balance between performance and cost.
| Tier | Characteristics | Azure Blob Mapping | Typical Data |
|---|---|---|---|
| Hot Storage | High-frequency access, low latency, high storage cost | Hot tier | Recent transaction records, current month reports |
| Warm Storage | Low-frequency access, lower storage cost, retrieval fee applies | Cool tier | Historical data from the past 1–3 years |
| Cold Storage | Archival, retrieval takes hours, lowest cost | Archive tier | Old data retained for regulatory compliance, archives 3+ years old |
Data can be automatically tiered over time (e.g., Azure Blob Lifecycle Management) or moved manually.
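As a hedged sketch of such a lifecycle rule (names and day thresholds are placeholders; consult the Azure documentation for the authoritative schema), a policy that cools blobs after 30 days and archives them after a year looks roughly like this, applied with `az storage account management-policy create --policy @/tmp/policy.json ...`:

```shell
# Write a sample Azure Blob lifecycle management policy (sketch only)
cat > /tmp/policy.json <<'EOF'
{
  "rules": [{
    "name": "tier-by-age",
    "enabled": true,
    "type": "Lifecycle",
    "definition": {
      "filters": { "blobTypes": ["blockBlob"] },
      "actions": {
        "baseBlob": {
          "tierToCool":    { "daysAfterModificationGreaterThan": 30 },
          "tierToArchive": { "daysAfterModificationGreaterThan": 365 }
        }
      }
    }
  }]
}
EOF
```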
Relational Database Tiering Implementation
Relational databases (e.g., SQL Server) do not have native tiering management, usually simulated in the following ways:
- Partition + Filegroup: Recent partitions go to SSD filegroups (Hot), old data partitions go to HDD filegroups (Cold), data is moved via Partition Switching after aging, no need for row-by-row copying.
- Archive Database: Batch move data exceeding the retention period to an independent historical database, keep the main database lean, historical database is for query only.
Elasticsearch has native ILM (Index Lifecycle Management): Indices can be set to automatically move from Hot nodes (SSD, responsible for write and real-time query) to Warm nodes (HDD, read-only) and then to Cold/Frozen nodes, executed automatically by the engine without manual intervention.
Data Storage Tiering vs Recovery Site: Same Name, Different Meaning
"Hot/Warm/Cold" in recovery sites refers to backup readiness (whether equipment is ready, how fast to switch); in data storage tiering, it refers to the trade-off between access frequency and cost. The two sets of terms have the same name but independent concepts.
BCP Test Type Comparison Table
| Test Type | English | Description | Risk | Cost |
|---|---|---|---|---|
| Tabletop Exercise | Tabletop Exercise | Simulate disaster scenarios through meeting discussions, step-by-step review of plan steps | None (discussion only) | Lowest |
| Structured Walk-Through | Structured Walk-Through | Representatives from various departments jointly review plan content item by item, confirming roles and procedures | None | Low |
| Simulation Test | Simulation Test | Simulate real disaster scenarios (e.g., simulate server room fire), execute notification and response processes, but do not actually switch systems | Low | Medium |
| Parallel Test | Parallel Test | Backup system actually starts and processes business, runs in parallel with the main system to compare results | Low (main system not interrupted) | Medium-High |
| Full Interruption Test | Full Interruption Test | Main system completely shut down, all business switched to backup site | Highest (if backup fails, business is impacted) | Highest |
TIP
- Parallel Test: Backup system actually processes transactions while the main system is still online, verifying backup capability without affecting formal business.
- Full Interruption Test is the only way to verify "true switching capability," but the risk is highest, usually executed only after management approval.
- Test frequency recommendation: Tabletop exercise once every six months, parallel/interruption test at least once a year.
Relationship between BCP and DRP
BCP (Business Continuity Plan) covers the complete strategy for an organization to maintain critical business operations during a disaster; DRP (Disaster Recovery Plan) is a subset of BCP, focusing on the technical recovery of IT systems and data. Both are formulated based on BIA analysis results.
| Aspect | BCP | DRP |
|---|---|---|
| Scope | Entire organization (business processes, personnel, communication, IT) | IT infrastructure (servers, network, data) |
| Goal | Maintain minimum business operations during a disaster | Restore IT systems to normal operating state |
| Key Input | BIA (Business Impact Analysis) | RTO / RPO goals from BIA, critical system list, backup architecture |
| Responsible Role | Senior management, business owners | IT department, security team |
BIA (Business Impact Analysis) is executed before BCP/DRP formulation and is the common foundation for both. By analyzing the impact of interruption on various business processes, critical business functions are identified, and RTO, RPO, and backup priorities are set accordingly, serving as input for BCP strategy planning and DRP technical design.
Physical Security
Physical Security Threats and Control Measures Comparison Table
| Concept | English | Description |
|---|---|---|
| Tailgating | Tailgating | Unauthorized personnel closely follow authorized personnel through access control without independent card verification |
| Piggybacking | Piggybacking | Authorized personnel knowingly and actively bring in unauthorized personnel (e.g., "helping open the door") |
| Mantrap | Mantrap / Airlock | Double-door interlocking design, the second door can only be opened after the first door is closed, forcing individual verification |
| CCTV | CCTV | Closed-circuit television monitoring, providing real-time monitoring and post-incident review capabilities |
| Visitor Management | Visitor Management | Visitor registration, wearing ID badges, full-process escort, badge retrieval upon departure |
| Environmental Control | Environmental Controls | Temperature and humidity monitoring, fire suppression systems (FM-200 gas fire suppression), UPS, backup generators |
| Cable Management | Cable Management | Network line labeling and physical isolation to prevent unauthorized connection or eavesdropping |
Tailgating vs Piggybacking
- Tailgating: The person being followed is unaware, a passive security vulnerability.
- Piggybacking: The person being followed is aware and actively cooperates, a behavior violating security policy.
- Countermeasure: Mantrap can prevent both simultaneously because everyone must be verified independently.
- Common defense combination for server rooms: Access card + Mantrap + CCTV + Visitor registration system.
Environmental Control and Fire Suppression Systems
| Control Item | English | Description |
|---|---|---|
| HVAC | HVAC (Heating, Ventilation, and Air Conditioning) | Server rooms must maintain 18–27°C and 40–60% relative humidity. Overheating leads to equipment failure, excessive humidity leads to condensation damaging circuits. |
| Wet Pipe | Wet Pipe | Pipes are pre-filled with water, spraying immediately upon fire detection. Fastest reaction but damages electronic equipment, not suitable for server rooms. |
| Dry Pipe | Dry Pipe | Pipes are filled with compressed air, injecting water only when triggered. Suitable for low-temperature environments (anti-freeze), still sprays water, not suitable for precision equipment areas. |
| Pre-Action | Pre-Action | Combines smoke and temperature dual detection, injecting water only when both are triggered. Reduces risk of accidental spraying, suitable for areas surrounding data centers. |
| Clean Agent | Clean Agent | Uses inert gas or chemical gas (e.g., FM-200, Novec 1230, CO₂) instead of water. Non-conductive, leaves no residue, first choice for server rooms. CO₂ has suffocation risk, must be paired with personnel evacuation alarms. |
Fire Suppression System Selection Basis
- Office: Wet Pipe, lowest cost, fastest reaction.
- Server Room Perimeter (Occupied Areas): Pre-Action, dual confirmation reduces accidental spraying.
- Server Room Core (Server Area): Clean Agent (FM-200 / Novec 1230), does not damage equipment.
- Unoccupied Enclosed Spaces: CO₂, strongest fire suppression effect but displaces oxygen, personnel must evacuate first.
TEMPEST (Telecommunications Electronics Material Protected from Emanating Spurious Transmissions)
TEMPEST is the electromagnetic leakage protection standard defined by the US NSA. Electronic equipment generates electromagnetic radiation (Emanation) during operation; attackers can capture signals from a distance using high-sensitivity receivers to reconstruct screen images or keyboard input.
| Protection Measure | Description |
|---|---|
| Faraday Cage | Enclose space with conductive materials (metal mesh, metal plates) to block electromagnetic waves from entering or leaving. Server room level requires integration of walls, floors, and ceilings, and all incoming/outgoing lines must pass through filters. |
| TEMPEST Certified Equipment | Equipment itself reduces electromagnetic radiation from the design stage, meeting NSA certification standards (e.g., NSTISSAM TEMPEST/1-92). |
| Distance Control | Limit the distance between sensitive equipment and building exterior walls (Zone control), utilizing the characteristic that electromagnetic signals attenuate with distance. |
| White Noise Generator | Emit random electromagnetic signals to cover equipment radiation, interfering with the attacker's signal capture. |
TIP
- TEMPEST guards against Passive Eavesdropping; attackers do not need to contact the target equipment.
- Daily applications of Faraday cages: microwave oven shells, mobile phone signal shielding bags, MRI rooms.
- Paired with cable management: Fiber optics do not radiate electromagnetic waves and are the preferred transmission medium for high-security environments.
Cable Management Practices
| Item | Description |
|---|---|
| Structured Cabling | Follow TIA/EIA-568 standard, distinguish between Horizontal Cabling and Backbone Cabling. |
| Labeling System | Label both ends of every line with a unique identifier, corresponding to the Patch Panel Diagram, ensuring traceability. |
| Physical Isolation | Separate power lines and data lines in different cable trays to avoid EMI (Electromagnetic Interference). Manage copper cables and fiber optics separately. |
| Locked Wiring Closet | Wiring closets (IDF, Intermediate Distribution Frame) should be locked and controlled to prevent unauthorized connection or eavesdropping. |
| Fiber Optic Anti-Eavesdropping | Fiber optics do not radiate electromagnetic waves; eavesdropping requires physical bending of the fiber (Fiber Tapping), which can be detected by optical power meters detecting abnormal attenuation. |
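The optical-power check in the last row can be sketched numerically. This is a minimal illustration, not a monitoring tool: the power readings and the 0.5 dB alarm margin are made-up values for the example, not standard thresholds.

```python
import math

def attenuation_db(p_in_mw: float, p_out_mw: float) -> float:
    """Optical loss in dB between transmit and receive power (milliwatts)."""
    return 10 * math.log10(p_in_mw / p_out_mw)

def tap_suspected(baseline_db: float, current_db: float, margin_db: float = 0.5) -> bool:
    """Flag the link when loss rises noticeably above its commissioning baseline.

    A bend-coupling tap leaks a fraction of the light, so measured loss increases.
    The 0.5 dB margin here is illustrative only.
    """
    return (current_db - baseline_db) > margin_db

baseline = attenuation_db(1.0, 0.79)  # ~1.0 dB loss when the link was commissioned
current = attenuation_db(1.0, 0.63)   # ~2.0 dB loss measured now
print(round(baseline, 2), round(current, 2), tap_suspected(baseline, current))
```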
Media Sanitization
When devices are scrapped or replaced, formatting or deleting files cannot completely clear data; corresponding destruction methods must be chosen according to the media type:
| Method | HDD (Traditional Magnetic Hard Disk) | SSD / Flash Storage |
|---|---|---|
| Degaussing | ✅ Effective (destroys magnetic records) | ❌ Ineffective (Flash does not rely on magnetism, degaussing cannot clear data) |
| Overwrite / Wipe | ✅ Effective (still a common Clear method for HDD, but should be executed according to current standards and organizational procedures) | ⚠️ Partially effective (wear leveling may retain old data blocks, requires support from manufacturer's secure sanitization commands) |
| ATA Secure Erase | ✅ Effective | ✅ Effective (if supported by manufacturer, controller ensures complete clearing) |
| Cryptographic Erase | ✅ Effective | ✅ Effective (encrypt all data first, then destroy the key, data becomes unreadable, recommended method for SSD) |
| Physical Destruction | ✅ Most thorough | ✅ Most thorough (NSA requires grinding to ≤ 2mm fragments) |
NIST SP 800-88 Rev. 2 "Guidelines for Media Sanitization" (September 2025) divides media sanitization into three levels:
| Level | Description | Applicable Scenario |
|---|---|---|
| Clear | Logical sanitization (overwrite), no special equipment required | General confidential data, reuse within the organization |
| Purge | Degaussing, ATA Secure Erase, NVMe Sanitize, or compliant cryptographic erase; can resist laboratory-level recovery attacks | Sensitive data, device moves outside organizational control |
| Destroy | Physical destruction, ensuring data is completely unrecoverable | Top secret level, media must be completely scrapped |
- Conditions for Cryptographic Erase to reach Purge level: Encryption enabled from device configuration, uses NIST-approved algorithms like AES-256, key destruction is verifiable; if any condition is not met, it only reaches Clear level.
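The mechanics of Cryptographic Erase can be sketched in a few lines. Note the hedge: this uses a toy XOR keystream (SHA-256 in counter mode) purely to show the idea; a real self-encrypting drive must use a NIST-approved algorithm such as AES-256, as the conditions above require.

```python
import hashlib
import secrets

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher for illustration only -- NOT a real cipher.

    Stands in for the AES-256 a real self-encrypting drive would use.
    """
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# 1. The drive encrypts everything it stores under a media encryption key (MEK).
mek = secrets.token_bytes(32)
stored = toy_encrypt(mek, b"customer database contents")

# 2. Cryptographic Erase destroys the (small) key instead of overwriting the
#    (large) ciphertext -- which is why it is fast even on multi-TB media.
mek = None

# 3. Without the key, only unreadable ciphertext remains on the media.
print(stored != b"customer database contents")
```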
Common Misconceptions
- SSDs must not rely on degaussing; cryptographic erase or physical destruction should be prioritized.
- "Formatting" ≠ "Sanitization": Formatting only deletes file indices, the data itself remains residual and can be restored by forensic tools.
SP 800-88 Rev. 1 (2014) Background
- Rev. 1 directly listed technical operation details (e.g., "HDD overwrite with all zeros at least once"), which have been widely cited for years.
- Rev. 2 shifts from an "operation manual" to a "framework for establishing organizational-level sanitization programs," with most technical details now referencing IEEE 2883, NSA specifications, or organization-approved standards.
- Rev. 2 adds NVMe-specific guidance and explicitly defines the conditions for cryptographic erase to reach the Purge level (resolving the ambiguity of Rev. 1).
Network Security
Network Attack Four Basic Types Comparison Table
Corresponding to the CIA triad, network attacks can be divided into four basic types:
| Type | Damaged Attribute | Description | Typical Technique |
|---|---|---|---|
| Interruption | Availability | Data is cut off during transmission and cannot reach the destination | DDoS, cutting physical lines |
| Interception | Confidentiality | Intercepted and eavesdropped during transmission, content leaked but transmission is unaffected | Packet Sniffing, Man-in-the-Middle eavesdropping |
| Modification | Integrity | Data is altered by an unauthorized party in transit, and the receiver is unaware | MITM packet tampering, SQL Injection |
| Fabrication | Authenticity | Forging identity or data, pretending to be someone else to send | IP Spoofing, phishing emails |
TIP
- Interruption → You don't receive it (Availability destroyed).
- Interception → You received it, but the content was seen (Confidentiality destroyed).
- Modification → You received it, but the content was changed (Integrity destroyed).
- Fabrication → You received it, but the sender is fake (Authenticity destroyed).
Network Architecture and Protocol Comparison Table
OSI 7-Layer Responsibilities
| OSI Layer | English | PDU | Responsibility | Common Protocols & Technologies |
|---|---|---|---|---|
| L7 Application | Application | Data | Faces users or applications directly, provides network service interfaces (e.g., browsing web, transferring files). | HTTP/HTTPS, FTP, DNS, DHCP, SSH |
| L6 Presentation | Presentation | Data | Responsible for data format conversion, encryption/decryption, compression, ensuring different systems can understand data formats. | SSL/TLS, JPEG, Base64 |
| L5 Session | Session | Data | Establishes, manages, and terminates sessions between two ends, handles connection synchronization and dialogue control. | NetBIOS, RPC |
| L4 Transport | Transport | Segment/Datagram | Provides reliable or fast end-to-end (Port to Port) transmission, responsible for flow control and error recovery. | TCP (reliable), UDP (low latency) |
| L3 Network | Network | Packet | Responsible for cross-network logical addressing (IP address) and routing, determines the best path from source to destination. | IP (IPv4/v6), ICMP, BGP, OSPF |
| L2 Data Link | Data Link | Frame | Responsible for intra-network physical addressing (MAC address), frame encapsulation, and error detection, controls data transmission between nodes. | Ethernet (MAC), ARP, VLAN, PPP |
| L1 Physical | Physical | Bit | Defines electrical, optical, or wireless signal specifications for physical transmission media, responsible for sending/receiving raw bitstreams. | 1000BASE-T, Wi-Fi RF, fiber, coaxial cable |
TCP/IP Model vs OSI Comparison
The TCP/IP model has two versions: the "Classic 4-Layer" and the "Modern Practical 5-Layer." The one defined by the US Department of Defense (DoD) in the early days (RFC 1122) is 4 layers. With the development of network hardware technology, mainstream textbooks and network management certifications (e.g., Cisco CCNA) generally adopt the 5-layer model (also called the hybrid model).
| OSI Layer | TCP/IP 4-Layer (Classic DoD) | TCP/IP 5-Layer (Modern Practical) |
|---|---|---|
| L7 App / L6 Pres / L5 Sess | Application | Application |
| L4 Transport | Transport | Transport |
| L3 Network | Internet | Network |
| L2 Data Link | Network Access | Data Link |
| L1 Physical | ⬆️ (Merged into Network Access) | Physical |
TIP
- Reason for merging: L5~L7 are merged because modern applications handle session, encryption, and business logic themselves; L1~L2 are merged because the physical NIC and its driver operate as a single unit.
- PDU and debugging correspondence: Find Port number → Check L4 (Wireshark); Find IP address → Check L3 (Ping, routing table); Find MAC address → Check L2 (Switch, VLAN).
- TCP protocol vs TCP/IP model: TCP is a single transport protocol at L4; TCP/IP is the collective name for the entire L1~L7 internet communication architecture, even if UDP is used at the bottom, it still runs within the TCP/IP model framework.
- For the complete encapsulation/decapsulation process of PDU at each layer, see the PDU Comparison Table.
💡 Protocol Abbreviation Quick Check
Protocol Number is located in the L3 IPv4 header, identifying the upper-layer protocol used by the Payload (different from the Port number at L4, which identifies the application).
| Protocol Number | Protocol | Description |
|---|---|---|
| 1 | ICMP | Network diagnostics and error reporting (used by ping) |
| 6 | TCP | Reliable transmission, has handshake and retransmission mechanisms |
| 17 | UDP | Low-latency transmission, no connection guarantee |
| 47 | GRE | VPN tunnel encapsulation protocol |
| 50 | ESP | IPsec encrypted packet (common in VPN) |
| 51 | AH | IPsec authentication header, verifies only, does not encrypt |
| 89 | OSPF | Internal dynamic routing protocol |
Other Protocol Abbreviations
| Abbreviation | Full Name | Description |
|---|---|---|
| ARP | Address Resolution Protocol | Resolves an IP address into the corresponding MAC address within the local network |
| BGP | Border Gateway Protocol | Exterior routing protocol that exchanges routes between autonomous systems (AS) across the internet |
| 802.1Q | — | VLAN encapsulation standard, adds a VLAN Tag to the Ethernet frame (VLAN is the technical concept, 802.1Q is the implementation protocol) |
| PPP | Point-to-Point Protocol | Used for identity verification and line establishment on point-to-point connections; modern broadband (PPPoE) still uses its variants |
| RPC | Remote Procedure Call | Allows programs to call functions on remote servers as if calling local functions; gRPC is the modern mainstream implementation |
| NetBIOS | Network Basic Input/Output System | Underlying protocol for early Windows Network Neighborhood; should be disabled in modern environments |
NetBIOS Security Risks
NetBIOS is an extremely outdated protocol; it is strongly recommended to disable it completely in modern enterprise networks because:
- Generates massive broadcast traffic in the internal network, consuming bandwidth.
- Attackers can use NBT-NS Poisoning to spread laterally in the enterprise internal network, a common infiltration technique for ransomware.
- Modern Windows file sharing has switched to the more secure SMB (Port 445) and no longer requires NetBIOS.
PDU (Protocol Data Unit) Comparison Table
| OSI Layer | PDU | English | Description |
|---|---|---|---|
| L5-L7 Application | Data | Data / Message | Raw data generated by the application, not yet added with any protocol header. |
| L4 Transport | Segment / Datagram | Segment (TCP) / Datagram (UDP) | TCP cuts data into segments and numbers them to ensure reliable transmission and order; UDP does not guarantee order and does not retransmit. |
| L3 Network | Packet | Packet | Adds source/destination IP address, determines routing path. |
| L2 Data Link | Frame | Frame | Adds source/destination MAC address and FCS (Frame Check Sequence) error check code. |
| L1 Physical | Bit | Bit | Electrical signals, optical signals, or radio waves. |
Encapsulation and Decapsulation
Imagine the process of mailing a package: you put the document (Data) into an envelope, write the recipient Port (L4), put it in an outer box with an address (L3 + IP), and finally hand it to the courier (L2 converted to frame, L1 converted to electrical signal). Each extra layer is called "Encapsulation"; the recipient side tears open each layer in order, called "Decapsulation."
Encapsulation order: Data → Segment / Datagram → Packet → Frame → Bit
Segment is the packaging unit for TCP, Datagram for UDP.
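The layering above can be sketched as nested envelopes. This is a toy model: real headers are fixed-size binary fields (built with e.g. `struct`), and the addresses below are illustrative placeholders, not real values.

```python
def encapsulate(data: bytes) -> bytes:
    """Each layer prepends its own 'envelope' around the unit above it."""
    segment = b"[TCP dport=443]" + data                 # L4: Port numbers -> Segment
    packet = b"[IP dst=203.0.113.7]" + segment          # L3: IP addresses -> Packet
    frame = b"[ETH dst=aa:bb:cc]" + packet + b"[FCS]"   # L2: MAC addresses + check code -> Frame
    return frame                                        # L1 then serializes the frame to bits

def decapsulate(frame: bytes) -> bytes:
    """The receiver strips each layer in reverse order."""
    inner = frame[len(b"[ETH dst=aa:bb:cc]"):-len(b"[FCS]")]
    inner = inner[len(b"[IP dst=203.0.113.7]"):]
    return inner[len(b"[TCP dport=443]"):]

frame = encapsulate(b"GET / HTTP/1.1")
print(decapsulate(frame))  # the original Data emerges after decapsulation
```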
TCP and UDP
TCP and UDP are both L4 transport layer protocols; they determine "how to send" data, not "where to send it" (that is the job of L3 IP).
| Aspect | TCP | UDP |
|---|---|---|
| Full Name | Transmission Control Protocol | User Datagram Protocol |
| Connection Mode | Connection-oriented (establishes connection before sending data) | Connectionless (sends directly) |
| Reliability | Guaranteed delivery: sequence numbers, acknowledgment responses, timeout retransmission | Not guaranteed: packets may be lost or out of order |
| Speed | Slower (handshake and acknowledgment have extra overhead) | Faster (no handshake and acknowledgment) |
| Common Applications | HTTP/HTTPS, SSH, SMTP, FTP | DNS query, VoIP, streaming media, QUIC |
TCP Three-way Handshake
TCP must complete three steps before sending data, confirming that both sides can send and receive:
1. SYN: the client sends a connection request carrying its initial sequence number.
2. SYN-ACK: the server acknowledges the client's SYN and sends its own SYN.
3. ACK: the client acknowledges the server's SYN; the connection is established and data can flow.
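The handshake itself is carried out by the kernel; from user space you only see that `connect()` and `accept()` return once it is done. A small localhost sketch (all names here are local to the example):

```python
import socket
import threading

# The kernel performs SYN -> SYN-ACK -> ACK inside listen()/connect();
# by the time accept() and connect() both return, the handshake is complete.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

def serve():
    conn, _ = server.accept()      # returns only after the 3-way handshake
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.create_connection((host, port))  # SYN, wait for SYN-ACK, reply ACK
data = client.recv(5)              # application data flows only after the handshake
client.close()
t.join()
server.close()
print(data)
```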
Common TCP Flags
| Flag | Description |
|---|---|
| SYN | Request to establish connection |
| ACK | Acknowledgment received |
| FIN | Request to terminate connection normally |
| RST | Force reset connection (abnormal interruption) |
| PSH | Request immediate delivery to application layer, do not wait for buffer to fill |
Security Relevance
- SYN Flood: Attacker sends massive SYN requests but deliberately does not complete the handshake, exhausting the server's half-open connection table, a DoS attack.
- TCP RST Injection: Attacker forges RST packets to force-terminate legitimate connections, can be used to interfere with communication or censorship (commonly used by firewalls).
- UDP Amplification: UDP's connectionless nature allows attackers to forge source IPs, using services with responses much larger than requests (e.g., DNS, NTP) to amplify attack traffic, see the L3-L7 Denial of Service Attack section.
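The amplification arithmetic is simple: because the attacker spoofs the victim's source IP, the metric that matters is the Bandwidth Amplification Factor (BAF), response size over request size. The byte counts below are illustrative, not measured values for any particular service.

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth Amplification Factor: reflected response size / spoofed request size."""
    return response_bytes / request_bytes

query_bytes, reply_bytes = 60, 3000   # small spoofed DNS query, large response
baf = amplification_factor(query_bytes, reply_bytes)
print(f"{baf:.0f}x amplification toward the spoofed (victim) address")
```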
Common Port Comparison Table
The Port number is located in the L4 TCP/UDP header and is used to identify which application the packet should be handed to. For example, when a browser connects to a server, the server receives the request on Port 80 or 443, letting the operating system know to hand the traffic to the web service rather than other programs.
Port numbers are divided into three categories based on range:
| Range | Name | Description |
|---|---|---|
| 0–1023 | Well-Known Ports | Officially assigned by IANA to common services; binding to this range on Linux/Unix requires root privileges |
| 1024–49151 | Registered Ports | Third-party applications register these with IANA, e.g., MySQL (3306), RDP (3389) |
| 49152–65535 | Ephemeral (Dynamic) Ports | Temporary ports dynamically assigned by the operating system to clients, released after the connection ends |
IANA (Internet Assigned Numbers Authority)
Responsible for managing global internet number resources, including the allocation and registration of IP addresses, AS numbers, and Port numbers.
Developers usually do not need to apply to IANA for a Port number; Ports in the Ephemeral range can be used freely. An application is only needed when developing a protocol or service that is meant to be public and become an industry standard, in which case a fixed Registered Port is requested from IANA so other systems can identify which service the Port belongs to.
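The ephemeral-port mechanism is easy to observe: binding to port 0 asks the OS to assign a free port, the same way outgoing client connections get their source port.

```python
import socket

# Bind to port 0 and the OS assigns a free port from its ephemeral range.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]
print(port)  # exact value varies by OS; note Linux's default ephemeral range is
             # 32768-60999, narrower than IANA's 49152-65535
s.close()
```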
Common Service Ports
| Port | Protocol | Transport Layer | Description |
|---|---|---|---|
| 20 | FTP-DATA | TCP | FTP data transmission |
| 21 | FTP | TCP | FTP control connection |
| 22 | SSH | TCP | Encrypted remote login and file transfer (SCP/SFTP) |
| 23 | Telnet | TCP | Plaintext remote login, deprecated and insecure |
| 25 | SMTP | TCP | Email transmission (server to server) |
| 53 | DNS | TCP/UDP | Domain name resolution (UDP for query, TCP for zone transfer) |
| 67/68 | DHCP | UDP | 67 for server side, 68 for client side, dynamic IP allocation |
| 80 | HTTP | TCP | Plaintext web transmission |
| 110 | POP3 | TCP | Email retrieval (deleted from server after download) |
| 143 | IMAP | TCP | Email retrieval (mail retained on server) |
| 161/162 | SNMP | UDP | 161 for query, 162 for Trap (active alert) |
| 389 | LDAP | TCP | Directory service query (plaintext) |
| 443 | HTTPS | TCP | TLS encrypted web transmission |
| 445 | SMB | TCP | Windows file sharing and Network Neighborhood |
| 465 | SMTPS | TCP | SMTP over TLS (email encrypted transmission) |
| 587 | SMTP Submission | TCP | Email client submission (requires authentication) |
| 636 | LDAPS | TCP | LDAP over TLS (encrypted directory query) |
| 853 | DoT | TCP | DNS over TLS (encrypted DNS query) |
| 993 | IMAPS | TCP | IMAP over TLS |
| 995 | POP3S | TCP | POP3 over TLS |
| 1433 | MSSQL | TCP | Microsoft SQL Server |
| 1521 | Oracle DB | TCP | Oracle Database |
| 3306 | MySQL | TCP | MySQL / MariaDB Database |
| 3389 | RDP | TCP | Windows Remote Desktop Protocol |
| 5432 | PostgreSQL | TCP | PostgreSQL Database |
| 6379 | Redis | TCP | Redis cache database (default no authentication, do not expose directly) |
| 8080 | HTTP Alt | TCP | HTTP alternative Port, commonly used for development or Proxy |
| 27017 | MongoDB | TCP | MongoDB Database |
TLS Positioning in OSI Model and HTTPS / HSTS
TLS (Transport Layer Security) spans L5 (Session Layer) and L6 (Presentation Layer) in the OSI model, but is classified as part of the application layer in the TCP/IP model. TLS is not an independent transport protocol, but provides encryption and identity authentication on top of TCP and below the application layer. Version evolution and security of each version are detailed in the Encryption Protocol Version Evolution Comparison Table.
HTTPS (HTTP over TLS): HTTP traffic is encrypted and transmitted via TLS, default Port 443.
HSTS (HTTP Strict Transport Security): The server informs the browser via the HTTP header Strict-Transport-Security that subsequent requests to the domain must use HTTPS, preventing SSL Stripping attacks (attackers downgrading HTTPS to HTTP).
| HSTS Directive | Description |
|---|---|
| `max-age=31536000` | Time (in seconds) for the browser to remember this policy; one year in this example. |
| `includeSubDomains` | All subdomains also force HTTPS. |
| `preload` | Adds the domain to the browser's built-in HSTS Preload List, eliminating the HTTP window before the first connection. |
Joining the HSTS Preload List
Ordinary HSTS only takes effect after the user has connected successfully once and the browser has recorded "this website must use HTTPS." Before that, the very first connection may still go over HTTP, leaving a window for attack. The Preload List is a list shipped inside the browser itself; once a domain is on it, even a first-time visitor is forced onto HTTPS.
Application process:
- Add a `Strict-Transport-Security` header to the HTTP response; it must include `preload`, `includeSubDomains`, and a `max-age` of at least one year.
- Submit the domain application at hstspreload.org.
- Once approved, the domain is included in the Chromium source code and then carried along when Chrome, Firefox, Edge, and Safari update.

Note: `includeSubDomains` is a mandatory condition; all subdomains must support HTTPS. Once joined, removal requires a separate application and waiting for the next browser version update to take effect, so do not apply lightly.
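For reference, emitting the header looks like this. The sketch below uses Python's stdlib HTTP server purely to show the header format; in real deployments the header is set at the TLS-terminating web server, and browsers only honor HSTS received over HTTPS.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class HstsHandler(BaseHTTPRequestHandler):
    """Minimal handler that attaches the HSTS policy to every response."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains; preload")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), HstsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    hsts = resp.headers["Strict-Transport-Security"]
print(hsts)
server.shutdown()
```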
SSL Stripping Attack
Attackers intercept the first HTTP request sent by the user, maintain their own HTTPS connection with the server, and return a tampered HTTP page to the victim, so the victim's credentials are transmitted in plaintext. See SSL Stripping.
C# Example: HttpClient Configuring TLS 1.3
```csharp
using System.Net.Http;
using System.Net.Security;
using System.Security.Authentication;

// Create a SocketsHttpHandler that restricts the allowed TLS versions
SocketsHttpHandler handler = new() {
    SslOptions = new() {
        // Only allow TLS 1.2 and TLS 1.3
        EnabledSslProtocols = SslProtocols.Tls12 | SslProtocols.Tls13,
    },
};
using HttpClient client = new(handler);

// Send HTTPS request
HttpResponseMessage response = await client.GetAsync("https://example.com/api/data");
response.EnsureSuccessStatusCode();
string body = await response.Content.ReadAsStringAsync();
Console.WriteLine($"Response length: {body.Length} characters");
```

In practice, the `HttpClient` lifecycle should be managed via `IHttpClientFactory` to avoid socket exhaustion issues.
DNS Security
DNSSEC (DNS Security Extensions) adds digital signatures to DNS responses, ensuring the authenticity and integrity of query results, preventing DNS Spoofing and Cache Poisoning.
Why is DNSSEC needed?
The browser receives a DNS response saying "the IP of bank.com is 1.2.3.4," how does it confirm this isn't forged by an attacker?
- Domain signs itself: `bank.com` uses a private key to generate an RRSIG signature for all DNS records and publishes the corresponding DNSKEY for verification.
- Self-signing cannot be trusted: anyone can generate a set of keys and sign with them, so having RRSIG and DNSKEY is not enough. The resolver also needs to confirm that this DNSKEY itself is legitimate.
- Parent issues credentials (DS record): the `.com` TLD stores a DS record in its own zone, which is the hash of the `bank.com` DNSKEY. The resolver compares them; if they match, it means `.com` recognizes this key as genuine.
- Trace back up: is the `.com` DNSKEY legitimate? The Root Zone stores the DS record for `.com`, so the comparison continues one level up.
- Starting point of trust: the Root Zone's public key is pre-built into all operating systems and resolvers (Trust Anchor), the only link in the entire chain that does not require external verification.
Chain of Trust
DNSSEC Record Types
| Record Type | Full Name | Description |
|---|---|---|
| RRSIG | Resource Record Signature | Digital signature of a set of DNS records |
| DNSKEY | DNS Key | Stores the Zone's public key, used to verify RRSIG |
| DS | Delegation Signer | Hash of the child zone's public key, stored in the parent zone to establish the chain of trust |
| NSEC / NSEC3 | Next Secure | Proves that a DNS record does not exist; NSEC3 adds salted hashing to prevent zone enumeration |
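The DS comparison in the chain of trust can be sketched as a simple hash check. Heavy hedge here: the real DS digest (RFC 4034) is computed over the canonical owner name plus the DNSKEY RDATA in DNS wire format; the byte strings below are illustrative stand-ins only.

```python
import hashlib

def make_ds(owner: bytes, dnskey: bytes) -> str:
    """Simplified DS: a SHA-256 digest over (owner name + public key bytes)."""
    return hashlib.sha256(owner + dnskey).hexdigest()

def validate_child_key(ds_from_parent: str, owner: bytes, dnskey: bytes) -> bool:
    """The resolver recomputes the digest and compares it to the parent's DS."""
    return make_ds(owner, dnskey) == ds_from_parent

child_key = b"bank.com-public-key-bytes"          # stand-in for the DNSKEY
ds_in_com_zone = make_ds(b"bank.com.", child_key)  # published by the .com zone

print(validate_child_key(ds_in_com_zone, b"bank.com.", child_key))        # True
print(validate_child_key(ds_in_com_zone, b"bank.com.", b"attacker-key"))  # False
```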
Windows / Linux DNS Query Command Examples
```powershell
# Windows / Linux common: Query basic A record and MX record
nslookup example.com
nslookup -type=MX example.com

# Windows: Query A record from a specified DNS Server
Resolve-DnsName -Name example.com -Server 1.1.1.1 -Type A -DnsOnly

# Windows: View DNSKEY published by the domain
Resolve-DnsName -Name example.com -Type DNSKEY -DnsOnly

# Windows: View DNSSEC signature records
Resolve-DnsName -Name example.com -Type RRSIG -DnsOnly
```

```shell
# Linux: Basic query and concise output
dig example.com
dig example.com MX
dig example.com +short

# Linux: Query A record from a specified DNS Server
dig @1.1.1.1 example.com A

# Linux: View DNSKEY published by the domain
dig example.com DNSKEY

# Linux: View DS record in the parent zone
dig com DS
```

- `Resolve-DnsName` can directly specify `-Server`, `-Type`, and `-DnsOnly`, suitable for basic DNS and DNSSEC troubleshooting on Windows systems.
- `dig @server name type` is the most common basic format, e.g., `dig @8.8.8.8 example.com MX` means "query a certain record from the specified DNS server."
- `nslookup` is suitable for basic queries; when observing DNSSEC records or more complete response fields, `Resolve-DnsName` and `dig` give clearer output.
DNSSEC vs DNS Encryption Protocols Comparison:
| Protocol | Purpose | Description |
|---|---|---|
| DNSSEC | Integrity + Authenticity | Verifies DNS response has not been tampered with, query content remains plaintext |
| DoH (DNS over HTTPS) | Encrypt Query | Encapsulates DNS queries in HTTPS requests, prevents query content from being eavesdropped, Port 443 |
| DoT (DNS over TLS) | Encrypt Query | Encrypts DNS queries with TLS, Port 853 |
Difference between DNSSEC and DoH / DoT
DNSSEC does not encrypt query content: Queries and responses remain plaintext, only ensuring the response has not been tampered with. If eavesdropping prevention is needed, it must be paired with DoH or DoT.
VPN (Virtual Private Network) Types and Protocols Comparison Table
VPN Connection Types and Technical Principles
| Type | Description | Typical Protocol | Applicable Scenario |
|---|---|---|---|
| Site-to-Site VPN | Interconnects two fixed networks via encrypted tunnel (e.g., Head Office ↔ Branch Office) | IPsec (Tunnel Mode) | Branch interconnection, cross-region data centers |
| Remote Access VPN | Individual users connect back to the enterprise internal network from outside | SSL/TLS VPN, IPsec | Remote work, business travelers accessing internal resources |
Site-to-Site VPN vs Remote Access VPN Technical Principle Differences
| Technical Aspect | Site-to-Site VPN | Remote Access VPN |
|---|---|---|
| Connection Architecture | Gateway-to-Gateway: VPN gateway devices at both ends are responsible for establishing the encrypted tunnel, internal hosts are unaware | Host-to-Gateway: User-side device directly establishes a connection with the enterprise VPN server |
| Tunnel Establishment | Persistent Tunnel: Once configured, the tunnel exists permanently (Always-On), maintaining connection status regardless of data transmission | On-Demand: Tunnel is established only when the user connects, and disappears after disconnection, supports multiple users connecting simultaneously but independently |
| Network Topology | Bridge Mode: "Bridges" two regional networks in different locations into one large logical network, devices at both ends can communicate directly using internal IPs | Routing Mode: User obtains a virtual IP (usually IP Pool assigned by VPN server), uses routing table to determine which traffic goes through VPN |
| IP Address Assignment | Static Routing: Subnets at both ends are fixed and non-overlapping (e.g., A side uses 192.168.1.0/24, B side uses 192.168.2.0/24), routing rules pre-configured | Dynamic IP Assignment: Dynamically assigns virtual IPs from a DHCP Pool to connected users, supports IP address reuse |
| Authentication Mechanism | Device Authentication: Based on Pre-Shared Key (PSK), digital certificates, or identity verification of both gateways, authenticates devices rather than individual users | User Authentication: Based on username/password, digital certificates, or multi-factor authentication (MFA), each connected user must be independently verified |
| Scalability Consideration | Low Scalability: Depends on topology choice for multi-site, see table below | High Scalability: VPN server can serve thousands of concurrent connections, only need to increase server computing power |
| Fault Impact Scope | Large Single Point of Failure Impact: Gateway failure at one end completely interrupts network interconnection between the two locations, affecting all users at that location | Small Single Point of Failure Impact: Individual user connection issues do not affect others, VPN server failure can be backed up by multiple servers |
Site-to-Site VPN Multi-Site Topology
| Aspect | Hub-and-Spoke | Full Mesh |
|---|---|---|
| Architecture | All branch sites only establish tunnels with the central Hub | Each site establishes tunnels directly with all other sites |
| Connections | N-1 | N×(N-1)/2 |
| Traffic Path | Branch-to-branch traffic must detour through Hub | Direct connection between sites, no detour |
| Management Complexity | Low, centralized in Hub configuration | High, connection count grows rapidly as site count increases |
| Latency | Higher (one extra hop) | Lower (direct connection) |
| Single Point of Failure | Hub failure interrupts communication between all branches; in practice, Hub needs HA (High Availability) backup | Failure of any site only affects itself, other sites can still connect directly |
| Applicable Scenario | Many branches, Hub bandwidth sufficient | Few sites, latency-sensitive or high traffic |
VPN Protocol Comparison
| Protocol | Operating Layer | Encryption Method | Characteristics |
|---|---|---|---|
| IPsec | L3 Network Layer | ESP (AES + HMAC) | Industry standard, suitable for Site-to-Site; supports Transport / Tunnel dual modes, see IPsec Modes Comparison Table. |
| SSL/TLS VPN | L4-L7 | TLS | Can be used via browser or lightweight Client, no need to install dedicated software; suitable for Remote Access. |
| WireGuard | L3 Network Layer | ChaCha20 + Poly1305 | Modern lightweight protocol, extremely small code base (~4000 lines), performance better than IPsec / OpenVPN. |
Split Tunneling vs Full Tunneling
| Mode | Behavior | Pros | Cons |
|---|---|---|---|
| Full Tunneling | All traffic goes through VPN tunnel | High security, all traffic protected by enterprise security policy | High bandwidth consumption, affects performance |
| Split Tunneling | Only enterprise resource traffic goes through VPN, others go through local network | Saves bandwidth, better user experience | Lower security, local traffic not protected |
VPN and Zero Trust
- Traditional VPN's Full Tunneling ensures all traffic is inspected, but in a Zero Trust Architecture, every resource has independent access control (PEP), weakening the role of VPN.
- Zero Trust does not necessarily cancel VPN, but VPN is no longer the only trust boundary.
VPN Protocol Details Supplement
IPsec IKE Two-Phase Negotiation
| Phase | Name | Purpose | Output |
|---|---|---|---|
| Phase 1 | IKE SA Establishment | Both parties negotiate encryption algorithms, verify identity, establish secure management channel | ISAKMP SA (Internet Security Association and Key Management Protocol SA) |
| Phase 2 | IPsec SA Establishment | Within the secure channel of Phase 1, negotiate encryption parameters for actual data transmission | IPsec SA (a pair of unidirectional SAs, each with an SPI identifier) |
- Phase 1 has two modes: Main Mode (6-step exchange, safer) and Aggressive Mode (3-step exchange, faster but identity protection is weaker).
- Phase 2 uses Quick Mode, can negotiate multiple sets of IPsec SAs.
- IKEv2 simplifies the negotiation process, merging Phase 1 + Phase 2 into 4 message exchanges (IKE_SA_INIT + IKE_AUTH).
WireGuard Technical Characteristics
- Based on Noise Protocol Framework, uses a fixed combination of cryptography (ChaCha20, Poly1305, Curve25519, BLAKE2s), no need to negotiate encryption algorithms.
- Adopts Fixed Public Key Pairing: Each Peer pre-configures the other's public key, simplifying the identity verification process.
- Uses UDP only for transmission, no TCP mode.
- Core code is about 4,000 lines (vs OpenVPN ~100,000 lines), easy for security auditing.
- Connection establishment time is usually within 100ms (IPsec / OpenVPN usually takes seconds).
SSL/TLS VPN Two Modes
| Mode | Description | Applicable Scenario |
|---|---|---|
| Clientless | Access Web applications via browser, no software installation required | Temporary access, partners, BYOD devices |
| Full-tunnel Client | Install dedicated client, all traffic transmitted via TLS tunnel | Remote employees requiring full network access |
- Clientless mode only supports Web applications and some protocols (e.g., RDP over HTML5), functionality is limited but deployment is simplest.
- Full-tunnel Client provides functionality equivalent to IPsec VPN, but has stronger ability to traverse firewalls via TLS (using TCP 443 Port).
IPsec Modes Comparison Table
| Aspect | Transport Mode | Tunnel Mode |
|---|---|---|
| Encryption Scope | Encrypts Payload only (IP header not encrypted) | Encrypts the entire original IP packet (including header), then wraps with a new IP header |
| IP Header Visibility | Original IP header retained, real source and destination IP visible | Original IP header encrypted and hidden, outer header shows VPN gateway IP |
| Typical Use | End-to-End (Host-to-Host) communication | Gateway-to-Gateway (Site-to-Site VPN) or Remote Access VPN |
| Security | Lower (attacker can know the IPs of both communicating parties) | Higher (original IP hidden, only VPN gateway IP visible) |
| Packet Structure | [Original IP Header][IPsec Header][Encrypted Payload] | [New IP Header][IPsec Header][Encrypted (Original IP Header + Payload)] |
TIP
- AH (Authentication Header): Provides integrity verification and source authentication only, does not provide encryption.
- ESP (Encapsulating Security Payload): Provides encryption + integrity verification + source authentication.
- In practice, most VPNs use ESP + Tunnel Mode.
- Transport mode is common for secure communication between two hosts within the same local area network.
QUIC Protocol and HTTP/3
QUIC is a transport layer protocol developed by Google and standardized by IETF (RFC 9000). HTTP/3 uses QUIC as the underlying layer, replacing the TCP + TLS architecture of HTTP/2.
| Comparison Aspect | HTTP/2 (TCP + TLS) | HTTP/3 (QUIC) |
|---|---|---|
| Transport Layer | TCP | UDP (QUIC implements reliable transmission on top of UDP) |
| Encryption | TLS (layered, handshake independent) | Built-in encryption (QUIC handshake merged with TLS 1.3) |
| Connection Establishment | TCP 3-way handshake + TLS handshake (2–3 RTT before data) | 0-RTT or 1-RTT (known servers can resume the connection in 0-RTT) |
| Head-of-Line Blocking (HOL Blocking) | Exists at TCP layer (one packet loss blocks all streams) | None (each stream is independent, single packet loss does not affect other streams) |
| Connection Migration | IP change requires connection reconstruction | Uses Connection ID, no disconnection when changing IP / changing network (e.g., mobile phone switching from Wi-Fi to mobile network) |
| Firewall/NAT Traversal | TCP 443 widely allowed | UDP 443, may be blocked in some environments |
Security Considerations:
- QUIC mandates TLS 1.3 and cannot be downgraded to weaker encryption versions, making it more secure than deployments where TLS 1.2 can still be negotiated.
- Due to the use of UDP, traditional firewalls based on TCP connection state may not be able to deeply inspect QUIC traffic, posing visibility challenges for enterprises.
- 0-RTT data may be subject to Replay Attacks; the server side must implement Idempotent protection for 0-RTT requests.
TIP
- RTT (Round-Trip Time): The time it takes for a packet to be sent and a response received. Higher RTT means greater latency. Establishing a connection requires several round trips, requiring several RTTs of waiting time.
- TCP Three-way Handshake: TCP must complete three steps (SYN → SYN-ACK → ACK) before sending data, consuming 1 RTT. HTTPS adds a TLS handshake on top (1 RTT for TLS 1.3, 2 for TLS 1.2), so 2–3 RTTs pass before data transmission can begin.
- Head-of-Line Blocking (HOL Blocking): TCP requires packets to arrive in order; one packet loss makes all subsequent packets wait for retransmission, even if other data streams are completely normal, they will be blocked.
- Jitter (Transmission Latency Variation): The inconsistency of packet arrival time. RTT is "average round-trip latency," Jitter is the "fluctuation amplitude of latency." Real-time communications like VoIP and video conferencing are extremely sensitive to Jitter; high Jitter leads to choppy audio or frozen screens. Attackers can deliberately create massive Jitter through network congestion (e.g., DDoS) to reduce service quality or even trigger timeout disconnections.
NAC and 802.1X Authentication
NAC (Network Access Control) is a mechanism that performs two types of checks before a device connects to the network:
- Identity Authentication: Confirm whether the device or user has the right to enter the network, usually implemented via 802.1X.
- Posture Assessment: Confirm whether endpoint security settings meet policy, including OS patches, antivirus software enabled status, whether prohibited software (e.g., P2P download tools) is installed, and whether personal firewalls are enabled.
Devices that fail either check are isolated to a restricted VLAN, allowed only to access patch servers for self-repair, or directly denied connection.
NAC Architecture:
802.1X is responsible for the "Identity Authentication" part mentioned above. It is an IEEE standard Port-based Network Access Control, using EAP (Extensible Authentication Protocol) as the identity verification framework.
802.1X Three Roles:
| Role | English | Description | Common Implementation |
|---|---|---|---|
| Supplicant | Supplicant | Device or software requesting network access | Windows built-in 802.1X Client, wpa_supplicant (Linux) |
| Authenticator | Authenticator | Network device controlling Port switch | Enterprise-grade switches, wireless APs |
| Authentication Server | Authentication Server | Verifies identity and determines authorization | RADIUS server (e.g., FreeRADIUS, Microsoft NPS) |
802.1X Authentication Process:
EAP Common Methods Comparison:
EAP itself is just a framework; actual authentication strength depends on the chosen EAP method. The core difference lies in "which side requires a certificate" and "whether a TLS channel is established."
| EAP Method | Server Certificate | Client Certificate | Authentication Type | Characteristics |
|---|---|---|---|---|
| EAP-MD5 | ✗ | ✗ | Unidirectional (Server cannot be verified) | Weakest; cannot prevent man-in-the-middle attacks, no longer recommended |
| PEAP | ✓ | ✗ | Unidirectional (Verify Server) | Establishes TLS channel first, client authenticates via password (MSCHAPv2) inside the channel; most common in Windows enterprise environments |
| EAP-TTLS | ✓ | ✗ | Unidirectional (Verify Server) | Similar to PEAP, but supports more inner protocols (PAP, CHAP, MSCHAPv2, etc.); better cross-platform compatibility |
| EAP-TLS | ✓ | ✓ | Bidirectional (Mutual Authentication) | Both sides require PKI certificates; highest security, but deployment cost is highest (requires managing client certificates) |
| EAP-FAST | Optional | ✗ | Unidirectional (Default) | Proposed by Cisco; uses PAC (Protected Access Credential) instead of certificates, avoiding certificate management burden |
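On the Supplicant side, the wpa_supplicant client from the roles table can express these choices directly. A minimal PEAP + MSCHAPv2 profile sketch follows; the SSID, identity, password, and CA path are hypothetical placeholders:

```text
# /etc/wpa_supplicant/wpa_supplicant.conf -- hypothetical PEAP profile
network={
    ssid="CorpWiFi"                       # hypothetical enterprise SSID
    key_mgmt=WPA-EAP                      # 802.1X / EAP instead of PSK
    eap=PEAP                              # establish TLS channel to the server first
    identity="alice"                      # hypothetical account
    password="changeme"                   # sent only inside the TLS channel (MSCHAPv2)
    ca_cert="/etc/ssl/certs/corp-ca.pem"  # verify the server certificate; omitting this invites rogue-AP attacks
    phase2="auth=MSCHAPV2"                # inner authentication method
}
```

Switching to EAP-TLS would replace the password lines with client_cert / private_key entries, reflecting the "both sides require certificates" row above.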
Private IP and CIDR Subnetting
Private IP Range (RFC 1918)
Private IPs are only routed within the organization; external communication requires NAT conversion to public IPs. Public IPs are assigned by IANA/ISP and are globally unique; private IPs are managed by the organization itself and can be reused across different organizations.
| Private Range | CIDR | Available Hosts | Common Scenario |
|---|---|---|---|
| 10.0.0.0 – 10.255.255.255 | 10.0.0.0/8 | 16,777,214 | Large enterprise internal network |
| 172.16.0.0 – 172.31.255.255 | 172.16.0.0/12 | 1,048,574 | Medium enterprise |
| 192.168.0.0 – 192.168.255.255 | 192.168.0.0/16 | 65,534 | Home/Small office |
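The three ranges can be checked mechanically. A small shell sketch (the function name is my own) classifies an IPv4 address against RFC 1918; note that the 172 range covers only 172.16–172.31:

```bash
#!/usr/bin/env bash
# Classify an IPv4 address as RFC 1918 private or public.
is_private() {
  case "$1" in
    10.*)                                  echo private ;;  # 10.0.0.0/8
    172.1[6-9].*|172.2[0-9].*|172.3[01].*) echo private ;;  # 172.16.0.0/12
    192.168.*)                             echo private ;;  # 192.168.0.0/16
    *)                                     echo public ;;
  esac
}

is_private 10.1.2.3      # private
is_private 172.31.255.1  # private
is_private 172.32.0.1    # public -- just outside the /12
is_private 8.8.8.8       # public
```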
CIDR Subnetting
In the CIDR prefix (/n), the first n bits are the network part, and the remaining (32 - n) bits are the host part.
Available hosts = 2^(32-n) - 2 (subtracting network address and broadcast address)
| CIDR | Host Bits | Available Hosts | Applicable Scenario |
|---|---|---|---|
| /22 | 10 | 1,022 | Large department (~1,000 units) |
| /24 | 8 | 254 | General office network segment |
| /26 | 6 | 62 | Small department |
| /28 | 4 | 14 | Small isolated network segment |
| /30 | 2 | 2 | Point-to-Point connection |
Security Significance of Subnetting
Shrinking the subnet shrinks the Blast Radius: if one segment is breached, the impact scope is limited to hosts within that subnet.
Practical example: Finance department 10 people → /28 (14 available IPs), even other departments in the same office building cannot connect directly.
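The host-count formula and the table above can be verified directly with shell arithmetic:

```bash
#!/usr/bin/env bash
# Available hosts for /n = 2^(32-n) - 2 (network and broadcast addresses excluded)
for n in 22 24 26 28 30; do
  echo "/$n -> $(( 2**(32-n) - 2 )) available hosts"
done
```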
Network Segmentation
Network segmentation cuts large networks into multiple smaller security zones, limiting lateral movement; even if an attacker invades one segment, they cannot directly access resources in other segments.
| Implementation | English | Layer | Description |
|---|---|---|---|
| VLAN | Virtual LAN | L2 | Establishes logically isolated Broadcast Domains on the same physical switch. Communication between different VLANs must pass through L3 routers or firewalls. |
| ACL | Access Control List | L3-L4 | Sets rules on routers or L3 switches to filter allowed or denied traffic based on IP address, Port number, and protocol. |
| DMZ | Demilitarized Zone | L3-L7 | Isolated buffer zone between external network (Internet) and internal network, usually places external services (Web server, Mail server, DNS). External can access DMZ, but cannot enter internal network directly; internal can access DMZ, but DMZ hosts cannot move laterally to internal network directly after being breached. |
| Firewall Zone | Firewall Zone | L3-L7 | Divides network into zones with different trust levels (e.g., Trust / Untrust / DMZ), cross-zone traffic is checked by firewall policy. |
| Microsegmentation | Microsegmentation | L2-L7 | In virtualized or cloud environments, sets security policies per workload (VM / Container), realizing core principles of Zero Trust architecture. |
VLAN Security
VLAN (Virtual Local Area Network) cuts out multiple logically independent network segments on the same physical switch, allowing traffic from different departments (e.g., Finance VLAN 10, Engineering VLAN 20) not to interfere with each other, even if they share the same physical line. The implementation standard is 802.1Q, which adds a 4-byte VLAN Tag to the Ethernet frame, allowing the switch to determine which network segment each packet belongs to.
Port Types
Ethernet frames sent by terminal devices (computers) do not carry VLAN tags themselves, but the switch must know which VLAN this frame belongs to when forwarding. The Port type determines how the switch handles this tag at each connection point.
| Type | Connection Object | Description |
|---|---|---|
| Access Port | Terminal device (computer, printer, IP Phone) | Belongs to a single VLAN. Terminal devices send Untagged Ethernet frames; the switch adds the 802.1Q Tag of the VLAN the Port belongs to when receiving (Ingress); strips the Tag when forwarding out (Egress). Terminal devices themselves are unaware of the existence of VLANs. |
| Trunk Port | Between switches, switch to router | Carries traffic for multiple VLANs simultaneously. Frames are transmitted in Tagged 802.1Q format, the Tag remains unchanged throughout the Trunk link, and the receiving device must be able to interpret the VLAN Tag and forward it to the corresponding VLAN accordingly. |
| Native VLAN | — | The VLAN to which untagged frames on a Trunk Port belong, defaults to VLAN 1, mainly used for compatibility with legacy devices that do not support 802.1Q. Attackers can use this mechanism to launch Double Tagging attacks (see next section), so the Native VLAN should be changed to an unused non-VLAN 1 number. |
Scenario Comparison
Access Port — Department Office Door
Employees (terminal devices) do not need to wear any identification stickers inside the office because everyone in the office is from the same department. When an employee walks out of the department door, the guard at the door (Access Port) will stick a department identification sticker (VLAN Tag) on their back; it is torn off when entering the door. Terminal devices do not need to know about the existence of stickers at all; sticking and tearing are handled entirely by the switch.
Trunk Port — Building Shared Elevator
People from the Finance department and Engineering department share the same elevator (Trunk Port). To let the guard on another floor identify which department everyone belongs to, everyone must stick on a department sticker (Tagged) before entering the elevator, and the elevator keeps the sticker on without tearing it off throughout the journey. Upon arrival, the guard at the receiving end (another switch) sees the sticker and knows which department to guide the person to.
Native VLAN — People Without Stickers
If someone walks into the elevator without any stickers (Untagged frame arrives at Trunk Port), the building has a default rule: everyone is classified as "default identity" (Native VLAN, defaults to VLAN 1). Attackers can use this rule to add two layers of tags to the frame; the outer layer matches the Native VLAN, the switch strips the outer layer, and the inner tag sends the traffic into an unauthorized VLAN (i.e., Double Tagging attack).
Frame Transmission Process
VLAN Hopping Attack
Attackers exploit VLAN configuration flaws to cross VLAN boundaries and access unauthorized network segments.
Double Tagging
Vulnerability Principle: Exploits the default behavior of switches to "automatically strip tags" when processing Native VLANs to smuggle malicious packets into unauthorized network segments. This is a one-way blind attack.
Trigger Conditions:
- Environment Topology: There must be 2 or more switches in the attack path, connected via a Trunk Port.
- Attacker Location: The Port where the attacker is located must belong to a VLAN that matches the Native VLAN (usually defaults to VLAN 1) of the Trunk.
Malicious Payload Structure:
The attacker self-forges a special frame with two 802.1Q tags: [Outer Tag: Native VLAN (e.g., VLAN 1)] + [Inner Tag: Target Segment (e.g., VLAN 10)] + [Malicious Data].
Attack Execution Pipeline:
- Switch 1 (Vulnerability Trigger Point): Receives the attacker's packet, prepares to send it into the Trunk channel. Upon discovering "Outer Tag = Native VLAN," it triggers the system default rule: strip the outer tag.
- Trunk (Smuggling Channel): The outer tag is torn off; the hidden [Inner Tag: VLAN 10] is exposed and transmitted in this state within the Trunk.
- Switch 2 (Victim Forwarding Node): Receives the packet from the Trunk. Switch 2 only sees [Inner Tag: VLAN 10] and, following normal logic, forwards it to the target host in VLAN 10 without any defense.
This attack is naturally one-way: the target host's response frame follows the normal VLAN 10 path and cannot return to the attacker, so it is mostly used to trigger target behavior (e.g., ARP poisoning, service probing), rather than two-way data theft.
Switch Spoofing (Disguised Trunk)
Most switches have DTP (Dynamic Trunking Protocol) enabled by default, which automatically negotiates whether to establish a Trunk connection with the other side. The attacker's host sends DTP negotiation messages, inducing the switch to promote the Port to a Trunk Port. Once negotiation succeeds, the attacker's host begins receiving tagged frames for all VLANs, and VLAN isolation completely fails.
| Defense Measure | Defense Object | Description |
|---|---|---|
| Modify Native VLAN | Double Tagging | Change Native VLAN to an unused non-VLAN 1 number, making it impossible for the attacker to match the outer tag |
| Disable DTP | Switch Spoofing | Explicitly set Port to Access mode and disable auto-negotiation, prohibiting unauthorized devices from negotiating Trunk |
| Disable Unused Ports | Both | Set idle Ports to shutdown and assign to an isolated VLAN, reducing the attack surface |
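On Cisco IOS switches these three defenses map to a handful of interface commands; the interface names and VLAN numbers below are illustrative, not from any specific topology:

```text
! Port toward an end device: pin it to Access mode and disable DTP negotiation
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 10
 switchport nonegotiate
!
! Trunk toward another switch: move the Native VLAN off the default VLAN 1
interface GigabitEthernet0/24
 switchport mode trunk
 switchport trunk native vlan 999
!
! Unused port: shut down and park in an isolated VLAN
interface GigabitEthernet0/2
 switchport access vlan 666
 shutdown
```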
SDN Security
In traditional networks, each switch and router contains its own Control Plane (decision logic) and Data Plane (packet forwarding). If security rules need updating, they must be configured device by device, and any missed device forms a gap. SDN decouples the control plane from the device and centralizes it into the SDN Controller, with devices only responsible for executing forwarding instructions.
Three-Layer Architecture
| Component | Description |
|---|---|
| Application Layer | Network applications (e.g., load balancing, firewall policies), pass strategies to the controller via Northbound API (usually REST API). |
| Control Layer | SDN Controller (e.g., OpenDaylight, ONOS), receives instructions from the application layer, calculates global forwarding rules, and issues them to devices. |
| Infrastructure Layer | Physical switches/routers, receive instructions from the controller via Southbound API (e.g., OpenFlow), do not make routing decisions themselves. |
Scenario Comparison
Traditional networks are like every floor in a building having its own guard room, each guard having their own manual of entry rules, making their own judgments on whether to let people pass. Updating rules means running to every guard room, and missing one creates a vulnerability.
SDN's approach is to set up a central control room (SDN Controller) and abolish the rule manuals in each guard room. Guards no longer make their own judgments; they report to the central control room upon encountering anyone, and the central control room makes a unified decision on whether to let them pass or intercept them, then transmits the instructions back to the guard for execution.
- Northbound API: Management personnel (application layer) issue policies to the central control room via an internal communication system (REST API), e.g., "block this IP."
- Southbound API: The central control room issues specific instructions to guards on each floor via a dedicated channel (OpenFlow).
Security Advantages:
- Centralized Security Policy: Define security rules uniformly on the controller and deploy them to all devices at once, avoiding inconsistencies caused by device-by-device configuration.
- Dynamic Response: When abnormal traffic is detected, the controller can modify global forwarding rules in real-time to isolate infected hosts without operating device by device.
- Network Visibility: The controller masters the global traffic state, facilitating anomaly detection and forensic analysis.
Security Risks:
- Controller is a Single Point of Failure: Once the controller is breached, the forwarding rules of the entire network are controlled. Must deploy HA (High Availability) clusters and strictly restrict controller access permissions.
- Northbound API Attack: If an attacker gains application layer access, they can issue malicious instructions to the controller via REST API (e.g., open all traffic, bypass firewall rules). Must implement API authentication and authorization.
- Southbound API Attack: If an attacker can insert forged OpenFlow messages, they can directly manipulate switch forwarding rules. Communication between the controller and devices must be forced to use TLS encryption.
IPv6 Security Considerations
IPv4 uses 32-bit addresses (approx. 4.3 billion), which have long been exhausted. IPv6 expands the address to 128 bits (approx. 3.4×10³⁸), which is the long-term replacement solution. Due to the massive IPv4 infrastructure, most networks are currently in a Dual Stack transition period, running IPv4 and IPv6 simultaneously. This parallel state brings new security considerations: if security policies are designed only for IPv4, IPv6 traffic becomes a blind spot.
IPsec support was originally mandatory to implement in the IPv6 specification (later relaxed to a recommendation in RFC 6434), though enabling it remains optional. This native end-to-end encryption capability is an inherent security advantage that IPv4 does not have.
| Security Risk | Description |
|---|---|
| Dual Stack Risk | When IPv4 and IPv6 are enabled simultaneously, if IPv6 lacks corresponding security policies (firewall rules, IDS intrusion detection system signatures), attackers can bypass defenses targeted only at IPv4. |
| RA Forgery | IPv6 uses SLAAC for automatic address assignment, a process dependent on RA messages sent by routers. Attackers can send forged RAs, setting themselves as the default gateway, an effect similar to IPv4 ARP Spoofing. |
| IPv6 Tunnel Abuse | Transition mechanisms like Teredo, 6to4 encapsulate IPv6 packets in IPv4 for transmission, potentially bypassing firewalls and IDS that do not support IPv6. |
| Address Reconnaissance Difficulty | A /64 subnet in IPv6 contains 2⁶⁴ addresses, traditional IP-by-IP scanning is infeasible. Attackers switch to DNS queries, Multicast addresses, or EUI-64 (rules for deriving IPv6 addresses from device MAC addresses) for reconnaissance. |
| Defense Measure | Description |
|---|---|
| Block Unused IPv6 Traffic | If the organization has not deployed IPv6, explicitly block IPv6 traffic on the firewall, including Ports used by Teredo, 6to4 tunnels, to avoid becoming a security blind spot. |
| Dual Stack Policy Synchronization | If dual stack is deployed, firewall rules, IDS/IPS signatures, and log records must cover IPv4 and IPv6 simultaneously, not just protect one end. |
| Enable RA Guard | Enable RA Guard on switches to allow only legitimate routers to send RA messages, blocking forged Router Advertisements. |
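For the first defense row, on Linux hosts that genuinely do not need IPv6, one common approach is to disable the stack via sysctl; a persistent drop-in fragment might look like this (the filename is arbitrary):

```text
# /etc/sysctl.d/90-disable-ipv6.conf -- only on hosts where IPv6 is truly unused
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```

Apply with sysctl --system. Boundary blocking (firewall rules for IPv6 and tunnel Ports) is still needed for devices you do not control.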
💡 Terminology Quick Check
- SLAAC: Stateless Address Autoconfiguration — IPv6 devices do not need a DHCP server, automatically generating their own IP addresses via RA messages.
- RA: Router Advertisement — Messages broadcast periodically by routers to inform devices in the subnet of the gateway address and network prefix.
- EUI-64: Extended Unique Identifier (64-bit) — Converts a device's 48-bit MAC address into a 64-bit interface identifier, used for the host part of automatically generated IPv6 addresses.
- Teredo / 6to4: IPv6 over IPv4 Tunneling Transition Mechanism — Encapsulates IPv6 packets in IPv4 for transmission, allowing pure IPv4 environments to connect to IPv6 networks.
- RA Guard: Router Advertisement Guard — Switch function, allows only specified legitimate router Ports to send RA messages, preventing forgery.
- Dual Stack: Simultaneously enables IPv4 and IPv6 on the same device, currently the most common transition deployment method.
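The EUI-64 rule above is mechanical: flip the universal/local bit (0x02) of the first MAC octet and insert ff:fe between the third and fourth octets. A shell sketch (the function name is my own; the MAC is an arbitrary example):

```bash
#!/usr/bin/env bash
# Derive the EUI-64 interface identifier from a 48-bit MAC address.
eui64_iface() {
  local a b c d e f
  IFS=: read -r a b c d e f <<< "$1"
  # Flip the universal/local bit of the first octet, insert ff:fe in the middle.
  printf '%02x%s:%sff:fe%s:%s%s\n' $(( 0x$a ^ 0x02 )) "$b" "$c" "$d" "$e" "$f"
}

eui64_iface 00:1a:2b:3c:4d:5e   # -> 021a:2bff:fe3c:4d5e
```

This is why EUI-64 aids reconnaissance: given a vendor's MAC prefix, an attacker can enumerate likely interface identifiers instead of scanning all 2⁶⁴ addresses.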
Wireless Network Encryption Comparison Table
The WPA (Wi-Fi Protected Access) series is a Wi-Fi security certification standard released by the Wi-Fi Alliance, starting from the structural vulnerabilities of WEP (Wired Equivalent Privacy), and has been strengthened generation by generation through WPA → WPA2 → WPA3 to enhance encryption and authentication mechanisms.
| Standard | Encryption Algorithm | Authentication Method | Key Length | Known Vulnerabilities | Current Recommendation |
|---|---|---|---|---|---|
| WEP | RC4 | Open/Shared Key | 40/104 bit | IV repetition attack, key can be cracked in minutes | Disable |
| WPA | TKIP (RC4 improvement) | PSK / 802.1X | 128 bit | TKIP has Michael attack, designed as a transition scheme for WEP | Not recommended |
| WPA2 | AES-CCMP | PSK / 802.1X | 128 bit | KRACK attack (Key Reinstallation Attack), can force reuse of Nonce | Still usable, but should upgrade to WPA3 |
| WPA3 | AES-GCMP / AES-CCMP | SAE / 802.1X | 128/192 bit | Dragonblood attack (patched) | Recommended |
Known Vulnerability Description
- IV Repetition Attack (WEP): WEP uses a 24-bit Initialization Vector (IV) combined with RC4 for stream encryption. The IV space is only 2²⁴ ≈ 16 million, which is exhausted quickly in busy networks and starts repeating. When two packets use the same IV, ciphertext XOR can cancel out the key stream, thereby restoring plaintext or reversing the key.
- Michael Attack (WPA/TKIP): TKIP (Temporal Key Integrity Protocol) is the encryption protocol for WPA, designed to strengthen WEP vulnerabilities without replacing old hardware. The underlying layer still uses RC4, but adds per-packet keys (to avoid IV repetition) and message integrity code Michael (to prevent forgery). Michael's design strength is insufficient; attackers can forge packets and pass verification in a short time; TKIP was an emergency transition scheme, and its overall design has inherent limitations, eventually replaced by WPA2's AES-CCMP.
- KRACK (WPA2): Key Reinstallation Attack. Attackers replay message 3 of the four-way handshake, forcing the client to reinstall the key and reset the Nonce to 0. After the Nonce repeats, the AES-CCMP encryption protection fails, and attackers can decrypt packets or forge data. See the four-way handshake explanation below.
- Dragonblood (WPA3/SAE): Side-channel attack against early implementations of SAE, inferring passwords by measuring timing differences or cache access behavior during the Dragonfly calculation process. The Wi-Fi Alliance has patched this in WPA3 Revision 1; current correct implementations are not affected.
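The common thread in the IV-repetition and KRACK attacks is keystream reuse. A toy shell sketch (pure XOR arithmetic, not real RC4 or CCMP) shows why: XOR-ing two ciphertexts produced with the same keystream cancels the keystream entirely, leaving plaintext XOR plaintext:

```bash
#!/usr/bin/env bash
# Toy demo: with a reused keystream ks, c1 XOR c2 == p1 XOR p2.
p1=(0x48 0x69)   # plaintext bytes of packet 1
p2=(0x4f 0x6b)   # plaintext bytes of packet 2
ks=(0xa7 0x3c)   # keystream bytes reused by both packets (same IV / same Nonce)

for i in 0 1; do
  c1=$(( p1[i] ^ ks[i] ))   # ciphertext byte of packet 1
  c2=$(( p2[i] ^ ks[i] ))   # ciphertext byte of packet 2
  printf 'byte %d: c1^c2 = %#04x, p1^p2 = %#04x\n' "$i" $(( c1 ^ c2 )) $(( p1[i] ^ p2[i] ))
done
```

With known-plaintext fragments (protocol headers are predictable), the attacker peels plaintext out of this relation, which is exactly how WEP's 24-bit IV exhaustion and KRACK's Nonce reset turn into decryption attacks.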
WPA2 Four-Way Handshake
The four-way handshake confirms during connection that both parties hold the same PMK (Pairwise Master Key, derived from the passphrase), and negotiates the PTK (Pairwise Transient Key) used to encrypt unicast data and the GTK (Group Temporal Key) used for broadcast. The PMK itself is not transmitted over the network; both parties calculate it from the passphrase.
KRACK Attack Process
The attacker positions themselves as a man-in-the-middle and intercepts message 4 (ACK), causing the AP to believe the client did not receive message 3; the AP then retransmits message 3 as the protocol requires. Each time the client receives message 3, it reinstalls the key and resets the Nonce to 0. Once the Nonce repeats, the same key encrypts different packets with the same keystream, and the attacker can recover that keystream, thereby decrypting or forging packets.
WPA3 and SAE
WPA2 vs WPA3 Core Differences:
| Aspect | WPA2 | WPA3 |
|---|---|---|
| Key Exchange | 4-Way Handshake (PSK) | SAE (Dragonfly protocol) |
| Offline Dictionary Attack | Feasible (offline brute force after capturing handshake packets) | Infeasible (each handshake requires interaction, cannot be offline brute-forced) |
| Forward Secrecy (PFS) | Not supported | Supported (SAE generates independent session keys each time) |
| Open Network | No encryption | OWE (Opportunistic Wireless Encryption) |
| Enterprise Security | WPA2-Enterprise | WPA3-Enterprise (192-bit CNSA suite) |
SAE (Simultaneous Authentication of Equals)
SAE is based on the Dragonfly key exchange protocol (RFC 7664). The core idea of Dragonfly is to map the password to a point on an elliptic curve (PWE, Password Element), and perform key exchange based on this; the password itself is never transmitted over the network, nor does it leave any material that can be used for offline comparison. Attackers must interact with the other party for a complete handshake for every guess, unlike WPA2 where they can capture packets once and offline brute-force the password infinitely.
SAE handshake is divided into two stages:
- Commit Stage: Both parties map the password and MAC address to a password element (PWE) on the elliptic curve, generate a set of random private values, calculate scalar and element, and exchange them. Even if an attacker captures these public values, they cannot verify password guesses without interacting with the other party.
- Confirm Stage: Both parties calculate a confirmation code based on the negotiated shared key and verify it with each other, ensuring both sides know the correct password, completing bidirectional identity verification.
OWE (Opportunistic Wireless Encryption)
OWE is used for open networks without passwords (e.g., coffee shop Wi-Fi). Traditional open networks have no encryption at all, and anyone on the same network can eavesdrop on traffic. OWE allows each client and AP to perform an ECDH key exchange during connection, establishing an independent encrypted channel for that connection, preventing passive eavesdropping.
OWE does not verify the AP's identity (open networks have no password to bind the AP), so it cannot prevent Evil Twin Attacks, only providing eavesdropping protection, not identity verification.
WPA3 Version Description:
| Version | Applicable Scenario | Core Characteristics |
|---|---|---|
| WPA3-Personal | Home / Small Office | SAE replaces PSK, provides PFS |
| WPA3-Enterprise | Government / High-security environments | Supports 192-bit security suite (CNSA, Commercial National Security Algorithm) |
| OWE | Public Wi-Fi (Open Network) | Establishes independent encrypted channel for each user, no password required |
Purdue Model (OT / ICS Layered Security Architecture)
The Purdue Model (Purdue Enterprise Reference Architecture, PERA) is an industrial control system (ICS / OT) security layered reference architecture, dividing the OT environment into six levels (Level 0–5) based on function, explicitly specifying requirements for equipment, functions, and security isolation at each level.
| Level | Name | Equipment / Function | Description |
|---|---|---|---|
| Level 0 | Field Devices | Sensors, Actuators, Motors | Bottom-level hardware directly controlling physical processes |
| Level 1 | Control | PLC (Programmable Logic Controller), RTU (Remote Terminal Unit), DCS | Receives sensor signals, controls actuators based on logic |
| Level 2 | Supervisory | SCADA (Supervisory Control and Data Acquisition), HMI (Human Machine Interface) | Operator interface, monitors and controls Level 1 equipment; security incidents often spread from this level |
| Level 3 | Manufacturing Operations | MES (Manufacturing Execution System), Historian data collection server | Scheduling, production tracking, recording manufacturing data |
| Level 3.5 (iDMZ) | Industrial DMZ | Proxy, Jump Server, Data Diode, Firewall | Buffer isolation zone between OT (Level 3) and IT (Level 4); ensures there is absolutely no direct network connection between IT and OT, all data exchange must pass through proxy servers or jump servers in the iDMZ, effectively preventing IT-layer malware from directly entering the OT core. |
| Level 4–5 | Enterprise and External Network | ERP, business systems, Internet | Traditional IT environment |
OT Security Key Principles
- The boundary between Level 1 (PLC/RTU) and Level 2 (SCADA/HMI) is the lateral movement path most often exploited by attackers: after invading SCADA (Level 2), they can issue commands downward, directly affecting physical processes at Level 1 (e.g., Ukraine power grid attack incident).
- iDMZ (Industrial DMZ): Corresponds to Level 3.5, is a necessary buffer isolation zone between OT and IT; data should be transmitted unidirectionally through proxy servers or data diodes in the iDMZ to prevent external threats from entering OT in reverse.
- Purdue vs Zero Trust: Zero Trust requires verifying identity for every request, but many PLC devices in OT environments do not have authentication capabilities, so Zero Trust in OT environments is mainly implemented in network boundary control between Level 3–4.
💡 Terminology Quick Check
- IT: Information Technology — Computer systems that process data, communication, and business logic, e.g., servers, databases, ERP. The core difference from OT is that IT prioritizes Confidentiality, while OT prioritizes Availability.
- OT: Operational Technology — Hardware and software systems that directly control physical equipment and industrial processes.
- ICS: Industrial Control System — The upper-level collective term for OT, covering control systems like PLC, SCADA.
- PLC: Programmable Logic Controller — Industrial computer that receives sensor signals and controls actuators based on set logic.
- RTU: Remote Terminal Unit — Data acquisition and control equipment deployed in the field, communicating with SCADA.
- DCS: Distributed Control System — Industrial control architecture that distributes control functions to multiple nodes, commonly used in large manufacturing plants.
- SCADA: Supervisory Control and Data Acquisition — System that centrally monitors multiple field devices, operators monitor and control via HMI interface.
- HMI: Human Machine Interface — Graphical interface for operators to interact with industrial control systems.
- MES: Manufacturing Execution System — Information system that manages production scheduling and tracks manufacturing progress.
- ERP: Enterprise Resource Planning — Management system that integrates enterprise resources such as finance, HR, supply chain, etc., belongs to the IT side.
- Data Diode: Hardware device that only allows data to flow unidirectionally, commonly used at the OT / IT boundary to prevent external threats from entering OT in reverse.
Linux / Windows Network Security Tools Comparison Table
Firewall Configuration
| Function | Windows | Linux |
|---|---|---|
| Built-in Firewall | Windows Defender Firewall | iptables / nftables / firewalld |
| CLI Management | netsh advfirewall / PowerShell New-NetFirewallRule | iptables / nft / firewall-cmd |
| GUI Management | wf.msc (Windows Firewall with Advanced Security) | Cockpit Web Console / UFW (Uncomplicated Firewall) |
| Rule Persistence | Automatically saved | iptables requires iptables-save / iptables-restore; firewalld persists with XML configuration files |
Linux Firewall Evolution
- iptables: Classic packet filtering tool for Linux 2.4+, based on the Netfilter framework. Rules are organized by Chain and Table.
- nftables: Introduced in Linux 3.13+, successor to iptables. Syntax is more unified, performance is better, and it has gradually replaced iptables.
- firewalld: Default dynamic firewall management tool for RHEL / CentOS / Fedora, can use iptables or nftables at the bottom. Supports Zone concept and real-time rule changes.
iptables / nftables Firewall Rule Examples
iptables Common Rules
# Allow established connections and related traffic
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow SSH (Port 22)
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Allow HTTPS (Port 443)
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# Default drop all inbound traffic
iptables -P INPUT DROP
# Save rules (Debian/Ubuntu)
iptables-save > /etc/iptables/rules.v4
nftables Equivalent Rules
# Create table and chain
nft add table inet filter
nft add chain inet filter input { type filter hook input priority 0 \; policy drop \; }
# Allow established connections
nft add rule inet filter input ct state established,related accept
# Allow SSH and HTTPS
nft add rule inet filter input tcp dport { 22, 443 } accept
# Save rules
nft list ruleset > /etc/nftables.conf
Network Diagnostic Tools
| Function | Windows | Linux | Description |
|---|---|---|---|
| IP Config Query | ipconfig / ipconfig /all | ip addr / ifconfig (deprecated) | View IP, subnet mask, gateway of network interface cards. |
| Connection Status | netstat -an | ss -tulnp / netstat -tulnp | View current TCP/UDP connections and listening Ports. ss is the modern replacement for netstat. |
| Route Trace | tracert | traceroute / mtr | Trace the route jump path of packets to the destination. mtr combines ping and traceroute. |
| DNS Query | nslookup / Resolve-DnsName | dig / nslookup | Use nslookup for general queries; for DNSSEC records, refer to the centralized examples in the DNS Security section. |
| ARP Table Query | arp -a | ip neigh / arp -a | View the local ARP cache. |
Common Network Diagnostic Command Examples
# Trace route (find where packets are dropped or latency spikes)
tracert 8.8.8.8 # Windows
traceroute 8.8.8.8 # Linux
# View listening Ports (confirm which services are exposed externally)
netstat -ano | findstr LISTENING # Windows
ss -tulnp                       # Linux (-t TCP, -u UDP, -l listening, -n no name resolution, -p show process)
0.0.0.0 / 127.0.0.1 / localhost Easily Confused Security Traps
| Address | Semantics | Security Risk |
|---|---|---|
| 127.0.0.1 | Loopback address; packets never leave the host | Service is limited to local access and cannot be reached from outside. |
| localhost | Hostname, resolving to 127.0.0.1 by default (::1 in IPv6 environments) | /etc/hosts can be tampered with to point to other addresses; in dual-stack environments, firewall rules that only cover 127.0.0.1 may miss ::1. |
| 0.0.0.0 | Server-binding context: listens on all network interfaces, including external ones | Binding to 0.0.0.0 during development exposes services intended only for local access directly to the external network. |
Common traps:
- A database (Redis, MySQL) in a development environment bound to 0.0.0.0 and then put online without an external firewall is directly exposed to the public network.
- 127.0.0.1 inside a container refers to the container itself; the host cannot reach it. If host access is needed, explicitly bind to 0.0.0.0 or the host IP, and pair this with a firewall that restricts source IPs.
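The binding difference above can be made concrete in a few lines. A minimal sketch in Python (chosen here only for brevity; the semantics are identical in any language's socket API):

```python
import socket

# A socket bound to 127.0.0.1 is reachable only from the local host;
# one bound to 0.0.0.0 listens on every interface, including external ones.
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(("127.0.0.1", 0))   # port 0 lets the OS pick a free port
wildcard = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wildcard.bind(("0.0.0.0", 0))

print(loopback.getsockname()[0])  # 127.0.0.1 -> local-only
print(wildcard.getsockname()[0])  # 0.0.0.0   -> all interfaces
```

Checking `ss -tulnp` output for services listening on 0.0.0.0 (shown as `0.0.0.0:port` or `*:port`) is a quick way to spot this trap on a live host.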
Network Sniffing and Packet Capture
| Tool | Platform | Description |
|---|---|---|
| Wireshark | Windows / Linux / macOS | GUI packet analysis tool, supports parsing hundreds of protocols. |
| tcpdump | Linux / macOS | CLI packet capture tool, lightweight and efficient, suitable for server environments. |
| tshark | Windows / Linux / macOS | CLI version of Wireshark, suitable for script automation. |
tcpdump Common Commands
# Capture all traffic on eth0 interface
tcpdump -i eth0
# Capture only HTTPS traffic (Port 443)
tcpdump -i eth0 port 443
# Capture traffic for a specific host
tcpdump -i eth0 host 192.168.1.100
# Capture TCP SYN packets (first step of three-way handshake)
tcpdump -i eth0 'tcp[tcpflags] & tcp-syn != 0'
# Save capture results to pcap file (for Wireshark analysis)
tcpdump -i eth0 -w capture.pcap -c 1000
# Read pcap file
tcpdump -r capture.pcap
# Exclude SSH traffic (avoid capturing own remote connection)
tcpdump -i eth0 not port 22
Wireshark Display Filter Common Syntax
# Filter by protocol
http
dns
tls
tcp
arp
# Filter by IP address
ip.addr == 192.168.1.100
ip.src == 10.0.0.1
ip.dst == 172.16.0.0/12
# Filter by Port
tcp.port == 443
tcp.dstport == 80
# Filter by HTTP method
http.request.method == "POST"
# Filter by DNS query name
dns.qry.name == "example.com"
# Filter by TLS version
tls.handshake.version == 0x0304 # TLS 1.3 (note: the legacy version field often shows 0x0303; the supported_versions extension is the authoritative indicator)
# Filter by TCP Flag
tcp.flags.syn == 1 && tcp.flags.ack == 0 # SYN (new connection)
tcp.flags.reset == 1 # RST (connection reset)
# Combined conditions
ip.addr == 192.168.1.0/24 && tcp.port == 443 && tls
- Display Filter: filters which captured packets are displayed; syntax as above.
- Capture Filter: filters during capture; syntax is BPF (Berkeley Packet Filter), the same as tcpdump (e.g., host 192.168.1.100 and port 443).
Attack Techniques
Malware and Attack Chains
Malware Type Comparison Table
| Type | Independent Survival (Needs Host?) | Spread and Trigger Method | Main Purpose and Characteristics | Typical Example / Note |
|---|---|---|---|---|
| Virus | No (needs host file) | Requires user to click or execute host file | Destroy system, infect other files | Macro virus, CIH |
| Worm | Yes (independent executable) | Actively scans network vulnerabilities, self-replicates and spreads | Consumes network bandwidth and system resources | Blaster, SQL Slammer |
| Trojan | Yes (disguised as normal software) | Deceives users into actively downloading and installing | Steal secrets, open backdoors for remote control | Banking Trojan, RAT (Remote Access Trojan) |
| Ransomware | Yes | Phishing emails, vulnerabilities, or Trojan implantation | Encrypts files, demands ransom for decryption | WannaCry (has worm-like spread characteristics), LockBit |
| Spyware | Yes | Bundled with free software or malicious web pages | Keylogging, monitoring browsing behavior, stealing passwords | Keylogger |
| Logic Bomb | No (usually implanted in normal programs) | Triggers when specific conditions are met (e.g., specific date, specific operation) | Internal employee revenge, timed destruction | Malicious database deletion script |
OFAC Sanctions List Verification Obligation before Ransomware Payment
OFAC (Office of Foreign Assets Control) maintains a sanctions list (SDN List), listing countries, organizations, and individuals sanctioned by the US.
For enterprises headquartered or operating within the US (or involving USD transactions):
- If you decide to pay the ransom, you must first verify whether the attacker/payee is on the OFAC sanctions list.
- Providing funds to sanctioned hostile hacker organizations or terrorists violates US federal law: civil penalties apply on a strict-liability basis (regardless of intent), and violations may result in huge fines or even criminal prosecution.
- Even if they are not on the sanctions list, it is recommended to voluntarily disclose to OFAC after payment (voluntary disclosure can mitigate liability).
The purpose of the list check is compliance, not decryption assurance: enterprises verify the payee to avoid becoming a source of funds for sanctioned entities, not to ensure the attacker will actually decrypt; the latter belongs to the scope of negotiation and technical assessment.
Virus Advanced Variant Technology Comparison
- Polymorphic Virus: Changes its own encryption signature every time it infects, but the decrypted malicious core code remains the same. The goal is to evade signature-based scanning by traditional antivirus software.
- Metamorphic Virus: Rewrites its own code every time it infects (e.g., replacing instructions, inserting junk code), the appearance and structure are completely different, but the malicious behavior executed is the same, making it the most difficult type to detect.
Packing & Obfuscation Technology
Antivirus software's static analysis scans files on the hard disk, comparing signatures of known malicious programs. Packing makes the appearance on the hard disk completely different from the real code, thereby bypassing scanning:
Packing Process:
- Packer compresses/encrypts the original malicious code and merges it with the external shell program (Stub) responsible for unpacking into a new executable file.
- When antivirus scans, it sees compressed garbled code + Stub, finds no malicious signature → lets it pass.
- When the user executes, Stub runs first, unpacks/decrypts the original code and loads it into memory.
- Malicious code appears and executes only in memory.
| Technology | Operation Method | Attacker Goal | Common Tools/Techniques |
|---|---|---|---|
| Packing | Original code compressed/encrypted and wrapped in Stub, loaded into memory by Stub upon execution | Make the static appearance on the hard disk different from the real code, bypassing signature scanning | UPX (general compression), Themida (commercial-grade protection), custom Packer |
| Obfuscation | Rewriting code structure (e.g., shuffling execution flow, inserting junk code), execution result remains unchanged | Increase time cost for reverse engineering and manual analysis | Variable renaming, Control Flow Flattening, junk code insertion |
| Crypter | Advanced form of packing, protects Payload with high-strength encryption, decrypts dynamically upon execution | Achieve FUD (Fully Undetectable), each generated file has a different signature | Custom Crypter |
How to detect packing: a packed executable shows three anomalies on disk:
- The file content looks like garbled bytes (compression/encryption destroys statistical regularity).
- Normal program strings such as error messages and API names disappear.
- The list of Windows API calls is extremely short (the Stub needs only a few APIs to load the real program into memory; the rest appear only after unpacking).
Countermeasures: Sandbox dynamic analysis (let it run and observe behavior), Memory Dump (analyze after unpacking), behavioral detection instead of signature comparison.
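The core idea, that the same malicious payload can appear with entirely different on-disk signatures, can be sketched with a toy XOR "packer". This is purely illustrative (in Python for brevity); real packers prepend an unpacking Stub and use far stronger transforms:

```python
import hashlib

def pack(payload: bytes, key: int) -> bytes:
    # Toy packer: XOR "encrypts" the payload; XOR with the same key unpacks it.
    return bytes(b ^ key for b in payload)

payload = b"MALICIOUS-CORE"        # stands in for the real malicious code
sample_a = pack(payload, 0x41)     # same core, packed with key 0x41
sample_b = pack(payload, 0x5A)     # same core, packed with key 0x5A

# Two different on-disk hashes -> signature scanning sees two unrelated files
print(hashlib.sha256(sample_a).hexdigest() == hashlib.sha256(sample_b).hexdigest())  # False
# The Stub's job at runtime: reverse the transform, so the core only exists in memory
print(pack(sample_a, 0x41) == payload)  # True
```

This is exactly why memory dumps and behavioral detection beat on-disk signature comparison against packed samples.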
Cyber Kill Chain Comparison Table
The Cyber Kill Chain proposed by Lockheed Martin breaks down APT (Advanced Persistent Threat) attacks into 7 stages:
| Stage | English | Chinese | Description | Typical Activity |
|---|---|---|---|---|
| 1 | Reconnaissance | Reconnaissance | Collect target information | OSINT (Open-Source Intelligence), social media investigation, scanning public services |
| 2 | Weaponization | Weaponization | Create attack tools | Combine vulnerability exploits with Payload into deliverable weapons (e.g., malicious PDF) |
| 3 | Delivery | Delivery | Deliver weapons to target | Phishing emails, malicious websites, USB drop |
| 4 | Exploitation | Exploitation | Trigger vulnerability | Exploit software vulnerabilities, zero-day attacks, user clicks malicious attachments |
| 5 | Installation | Installation | Implant persistent backdoor | Install RAT (Remote Access Trojan), create scheduled tasks, modify registry keys |
| 6 | Command & Control (C2) | Command & Control | Establish remote control channel | Connect back to C2 server, receive instructions |
| 7 | Actions on Objectives | Actions on Objectives | Achieve final goal | Data theft, data destruction, lateral movement, ransomware |
Cyber Kill Chain vs MITRE ATT&CK
| Comparison Aspect | Cyber Kill Chain | MITRE ATT&CK |
|---|---|---|
| Structure | Linear 7-stage chain | Tactics × Techniques matrix |
| Granularity | High-level attack flow | Fine-grained techniques and sub-techniques |
| Applicable Scenario | Understanding overall attack flow, post-incident retrospective | Writing detection rules, red/blue team drills |
| Maintenance | Proposed by Lockheed Martin, updated less frequently | Continuously updated by MITRE, community contribution |
- Kill Chain defense thinking: Interrupting the attack chain at any stage can stop the attack; the earlier the stage, the lower the cost.
- ATT&CK defense thinking: Build detection rules for each technique to increase attacker costs and exposure risks.
- Complementary: Use Kill Chain to understand the overall attack, use ATT&CK to build fine-grained detection capabilities.
- Vulnerability exploitation techniques corresponding to each stage of Kill Chain: Exploitation stage commonly uses Buffer Overflow, ROP (Return-Oriented Programming); Installation stage commonly uses DLL Side-Loading.
Parentheses contain MITRE ATT&CK official tactic IDs (Tactic ID), which can be queried at attack.mitre.org by ID for all specific techniques under that tactic.
| Kill Chain Stage | Corresponding ATT&CK Tactic | Supplement |
|---|---|---|
| Reconnaissance | Reconnaissance (TA0043) | One-to-one correspondence, collecting target information. |
| Weaponization | Resource Development (TA0042) | Obtain or build attack infrastructure and tools. |
| Delivery | Initial Access (TA0001) | Phishing, vulnerability exploitation, supply chain attacks, etc. |
| Exploitation | Execution (TA0002), Defense Evasion (TA0005) | Execute code after triggering vulnerability, and evade detection. |
| Installation | Persistence (TA0003), Privilege Escalation (TA0004) | Implant backdoor and escalate privileges to maintain access. |
| C2 | Command and Control (TA0011) | Establish remote control channel. |
| Actions on Objectives | Collection (TA0009), Exfiltration (TA0010), Impact (TA0040) | Collect, exfiltrate data, or cause damage. |
MITRE Defense-Side Frameworks:
In addition to ATT&CK (attacker perspective), MITRE has two defense-side frameworks:
| Framework | Positioning | Description |
|---|---|---|
| MITRE D3FEND | Encyclopedia for defenders | Organizes defense techniques in a knowledge graph and precisely maps to ATT&CK attack techniques; each D3FEND technique indicates which ATT&CK TTPs it can defend against; content covers Harden, Detect, Isolate, Deceive, Evict. |
| MITRE ENGAGE | Active defense and adversarial engagement | Focuses on Deception and adversarial engagement, guiding defenders on how to use decoys (Honeypot / Honey Token), misleading information, etc., to actively expose TTPs, consume attack resources, and collect threat intelligence. |
Positioning Distinction of the Three
- ATT&CK: Describes "how the attacker attacks," for red teams to design attack chains, blue teams to build detection rules.
- D3FEND: Describes "how the defender defends," is the defense-side mapping of ATT&CK.
- ENGAGE: Describes "how the defender lures and counters," emphasizing active inducement of attackers into observable environments.
- Distinguishing tips: If the question says "defense knowledge base mapping ATT&CK attack techniques in a graph" → D3FEND; if it says "framework for network deception and adversarial engagement" → ENGAGE.
Web and API Attacks
Cross-Site Scripting (XSS) Comparison Table
| Type | Malicious Script Storage Location | Trigger Condition | Impact Scope | Defense Core Focus |
|---|---|---|---|---|
| Reflected XSS | URL parameter (not stored on server) | Deceive user into clicking a link with malicious Payload | Single user who clicked the link | Validate and filter HTTP request parameters |
| Stored XSS | Database or file system (permanently stored on server) | Any user browsing the infected page (e.g., message board, forum) | All users browsing the page (most lethal) | Output Encoding, filter input content |
| DOM-based XSS | Frontend browser DOM environment (server completely unaware) | Triggered when frontend JavaScript reads malicious source and dynamically modifies DOM | Single user who triggered the DOM operation | Avoid high-risk JS APIs like innerHTML |
Common Confusion: XSS / CSRF / SSRF
The three names are similar but the attack targets and mechanisms are completely different:
| Comparison Aspect | XSS (Cross-Site Scripting) | CSRF (Cross-Site Request Forgery) | SSRF (Server-Side Request Forgery) |
|---|---|---|---|
| Attack Target | User's browser | User's authenticated Session | Server itself |
| Core Mechanism | Inject malicious script into web page, execute in user's browser | Use Cookie already saved in user's browser, forge request to target website | Induce server to issue requests to internal or external resources |
| Prerequisites | Website does not correctly filter/encode output | User is logged into target website and Session is valid | Server accepts URL provided by user and issues request |
| Attacker Capability | Steal Cookie/Session, page defacement, keylogging | Execute operations as the victim (e.g., transfer, change password) | Access internal services (e.g., cloud metadata API 169.254.169.254), Port scanning |
| Main Defense | Output Encoding, CSP, HttpOnly Cookie | Anti-CSRF Token, SameSite Cookie, Referer verification | Whitelist URL/IP, prohibit private IP range, do not return full response |
| OWASP Category | A03 Injection | A01 Broken Access Control | A10 SSRF |
- XSS attacks the user's browser (Client-side); CSRF borrows the user's authentication state to operate; SSRF attacks the server side.
- CSRF does not require injecting any scripts, only requires the user to visit a malicious page while logged in.
- In practice, XSS and CSRF are often used together: inject a script that automatically sends CSRF requests via XSS.
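The SSRF defenses in the table (whitelist URL/IP, prohibit private ranges) can be sketched as a URL guard. A hedged Python sketch: `ALLOWED_HOSTS` is a hypothetical whitelist, and a real deployment must additionally handle DNS resolution, rebinding, and redirects:

```python
import ipaddress
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example.com"}   # hypothetical outbound-request whitelist

def is_safe_url(url: str) -> bool:
    # SSRF guard sketch: reject raw private/loopback/link-local IP literals
    # (e.g., the cloud metadata address 169.254.169.254) and require
    # hostnames to be on the whitelist.
    host = urlparse(url).hostname or ""
    try:
        ip = ipaddress.ip_address(host)
        return not (ip.is_private or ip.is_loopback or ip.is_link_local)
    except ValueError:
        # Not an IP literal -> must be an explicitly whitelisted hostname
        return host in ALLOWED_HOSTS

print(is_safe_url("http://169.254.169.254/latest/meta-data/"))  # False
print(is_safe_url("https://api.example.com/orders"))            # True
```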
C# Defense XSS: Output Encoding and Input Validation
Incorrect Demonstration: Directly embedding user input into HTML
// ❌ Dangerous: Directly concatenating user input, attacker can inject <script>alert('XSS')</script>
app.MapGet("/greet", (string name) =>
Results.Content($"<h1>Hello, {name}</h1>", "text/html"));
Correct Practice: Use HtmlEncoder for Output Encoding
using System.Text.Encodings.Web;
app.MapGet("/greet", (string name) => {
string safeName = HtmlEncoder.Default.Encode(name);
return Results.Content($"<h1>Hello, {safeName}</h1>", "text/html");
});
// Input:  <script>alert('XSS')</script>
// Output: &lt;script&gt;alert(&#x27;XSS&#x27;)&lt;/script&gt; (the browser displays it as plain text)
ASP.NET Core Razor Automatic Encoding
When outputting variables using @ syntax in Razor views, it automatically performs HTML Encoding by default:
// @Model.Name in Razor is automatically encoded, safe
<p>@Model.Name</p>
// ❌ Dangerous: @Html.Raw() bypasses automatic encoding; avoid it unless the content is confirmed safe
<p>@Html.Raw(Model.Name)</p>
CSP (Content Security Policy) Header Configuration
// Program.cs — ASP.NET Core Middleware setting CSP Header
app.Use(async (context, next) => {
context.Response.Headers.Append(
"Content-Security-Policy",
"default-src 'self'; "
+ "script-src 'self'; "
+ "style-src 'self' 'unsafe-inline'; "
+ "img-src 'self' data:; "
+ "font-src 'self'; "
+ "connect-src 'self'; "
+ "frame-ancestors 'none'; "
+ "base-uri 'self'; "
+ "form-action 'self'"
);
// Also set other security headers
context.Response.Headers.Append("X-Content-Type-Options", "nosniff");
context.Response.Headers.Append("X-Frame-Options", "DENY");
context.Response.Headers.Append("Referrer-Policy", "strict-origin-when-cross-origin");
await next();
});
C# CSRF Token Verification Concept
ASP.NET Core has a built-in Anti-Forgery Token mechanism, defending against CSRF through a double-token pattern:
MVC Controller
// Form automatically generates hidden field __RequestVerificationToken
// Razor View:
// <form asp-action="Transfer" method="post">
// @Html.AntiForgeryToken()
// ...
// </form>
[HttpPost]
[ValidateAntiForgeryToken] // Verify Token, return 400 if mismatch
public IActionResult Transfer(TransferRequest request) {
// Execute transfer logic
return RedirectToAction("Success");
}
Minimal API / SPA Scenario (Manual Token Acquisition)
using Microsoft.AspNetCore.Antiforgery;
WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
builder.Services.AddAntiforgery(options => {
options.HeaderName = "X-XSRF-TOKEN"; // SPA sends back Token via this Header
});
WebApplication app = builder.Build();
app.MapGet("/antiforgery/token", (IAntiforgery antiforgery, HttpContext context) => {
AntiforgeryTokenSet tokens = antiforgery.GetAndStoreTokens(context);
// Write Request Token to Cookie, frontend JS reads and puts into Header
context.Response.Cookies.Append("XSRF-TOKEN", tokens.RequestToken!,
new CookieOptions { HttpOnly = false, SameSite = SameSiteMode.Strict });
return Results.Ok();
});
app.MapPost("/api/transfer", async (IAntiforgery antiforgery, HttpContext context) => {
await antiforgery.ValidateRequestAsync(context); // Throws AntiforgeryValidationException if verification fails
// Execute business logic
return Results.Ok();
});
- Principle: The server generates a random Token, embedded in a form hidden field or Cookie. Upon submission, the server compares whether the Cookie Token and the form/Header Token are consistent. A cross-site request from an attacker cannot read the target website's Cookie content (Same-Origin Policy), so it cannot provide the correct Token.
- Paired with the SameSite=Strict or SameSite=Lax Cookie attributes, this further prevents browsers from automatically attaching Cookies to cross-site requests.
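The comparison step of the double-token pattern is language-agnostic. A minimal Python sketch (function names are made up for illustration), using a constant-time comparison as a production implementation would:

```python
import hmac
import secrets

def issue_token() -> str:
    # Server issues one random token, stored in a cookie AND echoed
    # back by the client in a form field or custom header.
    return secrets.token_urlsafe(32)

def validate(cookie_token: str, header_token: str) -> bool:
    # Constant-time comparison of the two copies (double-submit pattern)
    return hmac.compare_digest(cookie_token, header_token)

t = issue_token()
print(validate(t, t))              # True: both channels carry the same token
print(validate(t, issue_token()))  # False: a forged cross-site request cannot
                                   # read the cookie, so its token won't match
```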
OWASP Top 10 2021 Comparison Table
| Rank | Category | Key Description |
|---|---|---|
| A01 | Broken Access Control | Rose from A05 in 2017 to #1. Users can access unauthorized functions or data (e.g., manually changing URL to access others' data). |
| A02 | Cryptographic Failures | Renamed from "Sensitive Data Exposure" in 2017, now focuses on root causes. Covers insufficient transmission encryption, weak hashing, improper key management. |
| A03 | Injection | Includes SQL Injection, OS Command Injection, LDAP Injection, etc. Dropped from A01 in 2017 to #3. |
| A04 | Insecure Design | 2021 New Addition. Emphasizes architectural flaws in the design phase, not implementation vulnerabilities, representing that security must intervene early in the SDLC. |
| A05 | Security Misconfiguration | Default accounts/passwords not changed, unnecessary functions/services enabled, incorrect permission settings, etc. |
| A06 | Vulnerable and Outdated Components | Using third-party packages or frameworks with known vulnerabilities, corresponding to supply chain security issues. |
| A07 | Identification and Authentication Failures | Renamed from "Broken Authentication" in 2017 to cover broader identity identification flaws. |
| A08 | Software and Data Integrity Failures | 2021 New Addition. Includes CI/CD supply chain attacks, unverified update mechanisms (e.g., hijacked during plugin auto-update). |
| A09 | Security Logging and Monitoring Failures | Logs insufficient or unmonitored, leading to inability to detect or trace after an attack occurs. |
| A10 | Server-Side Request Forgery (SSRF) | 2021 New Addition. Server is tricked into issuing requests to internal resources (e.g., cloud metadata endpoints), potentially leaking cloud keys. |
2021 Version Main Changes
- The #1 spot changed from Injection (2017) to Broken Access Control (2021).
- Three new items added: A04 Insecure Design, A08 Integrity Failures, A10 SSRF.
- "Injection" dropped from #1 to #3, but remains one of the most common attack techniques.
Common Confusion: SQL Injection / Command Injection
| Comparison Aspect | SQL Injection | Command Injection (OS Command Injection) |
|---|---|---|
| Injection Target | SQL database query statement | Operating system Shell command |
| Attack Vector | SQL fragments embedded in form fields, URL parameters, HTTP Headers | Parameters concatenated to cmd.exe /c, /bin/sh -c, etc., Shell calls |
| Harm | Data leakage, data tampering, authentication bypass, even executing system commands via xp_cmdshell | Execute arbitrary commands directly with server privileges (RCE) |
| Root Cause | String concatenation of SQL instead of parameterized queries | Passing user input directly into Shell for execution |
| Main Defense | Parameterized Query, ORM, least-privileged database account | Avoid calling Shell; use whitelist to validate input when necessary, do not concatenate command strings |
- SQL Injection variants: Blind SQL Injection (no direct echo; inferred via Boolean conditions or time delays), Second-Order SQL Injection (malicious input is stored first and triggered in a subsequent query).
- Command Injection high-risk APIs in C#: Process.Start() paired with cmd /c or /bin/sh -c, with parameters coming from user input.
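The main defense in the table, Parameterized Query, can be demonstrated end-to-end with Python's built-in sqlite3 (shown in Python for brevity; the same principle applies to SqlParameter in C#):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

malicious = "alice' OR '1'='1"   # classic authentication-bypass payload

# ❌ String concatenation: the payload rewrites the query's logic
rows_bad = conn.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'").fetchall()

# ✅ Parameterized query: the payload is treated strictly as a literal value
rows_ok = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(rows_bad))  # 1 -> injection succeeded, condition is always true
print(len(rows_ok))   # 0 -> no user is literally named "alice' OR '1'='1"
```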
Supply Chain Attack
Corresponds to OWASP A06 (Vulnerable and Outdated Components) and A08 (Software and Data Integrity Failures). Attackers do not attack the target directly, but infiltrate its dependent upstream components.
Actual Cases
| Case | Year | Technique | Impact |
|---|---|---|---|
| SolarWinds (SUNBURST) | 2020 | Attacker infiltrated SolarWinds build server, implanted backdoor in Orion software update | US government agencies and thousands of enterprises affected |
| Codecov | 2021 | Bash Uploader script tampered with, stealing environment variables and keys in CI/CD environment | Open source projects and enterprise CI environments using Codecov |
| Log4Shell (CVE-2021-44228) | 2021 | JNDI Lookup function in Apache Log4j 2 has RCE vulnerability | Hundreds of millions of Java applications worldwide affected |
| event-stream (npm) | 2018 | After package ownership transferred, new maintainer injected malicious code stealing cryptocurrency wallets | npm package with millions of downloads |
| 3CX | 2023 | Supply chain attack chain: first infiltrated upstream vendor Trading Technologies, then infected 3CX desktop application via its software | Hundreds of thousands of enterprise users |
Countermeasures
- Use SCA (Software Composition Analysis) tools to scan dependencies for known vulnerabilities (e.g., dotnet list package --vulnerable, Snyk, OWASP Dependency-Check).
- Add package integrity verification to the CI/CD pipeline (e.g., NuGet signature verification, npm package-lock.json hash comparison).
- Adopt an SBOM (Software Bill of Materials) to track the versions of all dependent components.
- Establish internal package mirror repositories (e.g., Artifactory, Azure Artifacts) to avoid pulling directly from public registries.
- Principle of Least Privilege: CI/CD Secrets are granted only to the pipeline stages that need them.
- SRI (Subresource Integrity): when referencing JavaScript libraries or CSS from external CDNs, add an integrity="sha384-..." attribute to the <script>/<link> tag (the hash is pre-calculated and hardcoded by the developer). The browser recomputes the hash after download and refuses to execute on mismatch, blocking the scenario where a breached or tampered CDN serves a malicious script.
<script src="https://cdn.example.com/jquery.min.js" integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC" crossorigin="anonymous"></script>
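The integrity value in an SRI attribute is just a Base64-encoded SHA-384 digest of the file's exact bytes. A small Python sketch of how a developer could pre-calculate it (the script content below is a stand-in):

```python
import base64
import hashlib

def sri_sha384(content: bytes) -> str:
    # SRI format: "<algorithm>-<base64(digest)>", computed over the exact
    # bytes the CDN will serve.
    digest = hashlib.sha384(content).digest()
    return "sha384-" + base64.b64encode(digest).decode()

script = b"console.log('hello');"   # stand-in for the CDN file's bytes
print(sri_sha384(script))           # value to paste into integrity="..."
```

The browser recomputes the same digest after download; any tampering changes the digest and the resource is refused.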
BOLA / IDOR (API Security)
BOLA (Broken Object Level Authorization) is the #1 item in OWASP API Security Top 10, and is also a concrete manifestation of OWASP Top 10 A01 (Broken Access Control) in the API scenario.
IDOR (Insecure Direct Object Reference) is the most common form of BOLA:
- Example: A user obtains their own order via GET /api/orders/1001; changing 1001 to 1002 returns someone else's order.
- Root cause: The backend only verifies that the user is logged in, not whether that user is authorized to access the specific resource.
Defense measures:
- Backend forcibly checks "whether this user is the owner of the resource" for every request.
- Use unpredictable resource identifiers (e.g., UUID) instead of auto-incrementing IDs (reduces possibility of guessing, but not a fundamental solution).
- Implement access control policies at the API Gateway layer.
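The first defense measure, an ownership check on every object access, can be sketched as follows (data, names, and the in-memory store are hypothetical, in Python for brevity):

```python
# Minimal IDOR/BOLA guard sketch: verify ownership on EVERY object access,
# not merely that the caller is authenticated.
ORDERS = {1001: {"owner": "alice", "total": 250},
          1002: {"owner": "bob",   "total": 990}}

def get_order(current_user: str, order_id: int):
    order = ORDERS.get(order_id)
    if order is None or order["owner"] != current_user:
        # Respond 404/403 uniformly: don't reveal whether the object exists
        return None
    return order

print(get_order("alice", 1001) is not None)  # True: owner may read
print(get_order("alice", 1002))              # None: BOLA attempt blocked
```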
API Security (OWASP API Security Top 10)
OWASP API Security Top 10 (2023 version) lists the most common API security risks:
| Rank | Risk Name | Description |
|---|---|---|
| API1 | Broken Object Level Authorization (BOLA) | Not verifying if user is authorized to access specific objects (e.g., modifying ID in /api/users/123 allows accessing others' data) |
| API2 | Broken Authentication | Authentication mechanism flaws (weak Token, missing Rate Limiting, Token not expired) |
| API3 | Broken Object Property Level Authorization | Returning too many attributes or allowing modification of attributes that should not be modified (Mass Assignment) |
| API4 | Unrestricted Resource Consumption | Not limiting request frequency, return size, or batch quantity of API, can be exploited for DoS attacks |
| API5 | Broken Function Level Authorization | Not verifying if user is authorized to call specific API endpoints (e.g., general user accessing administrator API) |
| API6 | Unrestricted Access to Sensitive Business Flows | Not adding protection mechanisms to sensitive business flows (e.g., bulk purchase, ticket grabbing) |
| API7 | Server Side Request Forgery (SSRF) | API accepts URL parameters and issues server-side requests directly, can be exploited to access internal services |
| API8 | Security Misconfiguration | Missing security headers, excessive error message disclosure, unnecessary HTTP methods enabled |
| API9 | Improper Inventory Management | Not tracking all API versions and endpoints, old APIs not decommissioned become attack entry points |
| API10 | Unsafe Consumption of APIs | Trusting data returned by third-party APIs without verification, potentially introducing malicious content |
- BOLA (API1) is the most common API vulnerability, essentially lacking object-level authorization checks.
- Difference between API Security and Web Application Security: APIs usually have no UI, attackers operate HTTP requests directly, traditional WAF rules (targeted at HTML/Form) may not be able to effectively defend.
- Defense Recommendations: API Gateway paired with Rate Limiting, OAuth 2.0 + JWT, input validation, least return principle (return only necessary fields).
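The Rate Limiting recommendation above (and the defense for API4) is commonly implemented as a token bucket. A minimal single-process Python sketch; real deployments enforce this at the API Gateway or via a shared store such as Redis:

```python
import time

class TokenBucket:
    """Token bucket: allows bursts up to `capacity`, refills at `rate` tokens/sec."""
    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = float(capacity), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=3)  # 3-request burst, then 1 req/s
print([bucket.allow() for _ in range(5)])   # [True, True, True, False, False]
```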
GraphQL API Specific Security Issues
REST APIs usually map each endpoint to a specific resource, making it easy for attackers to enumerate; GraphQL has only a single endpoint, but the built-in Introspection Query mechanism allows anyone to query the complete Schema structure:
{ __schema { types { name fields { name } } } }
In the reconnaissance phase of penetration testing, attackers execute an Introspection Query to obtain all available queries (Query), mutations (Mutation), type definitions, and field names, equivalent to automatically generating an API attack-surface list and significantly shortening the exploration time for subsequent BOLA / injection attacks.
Defense:
- Disable Introspection in production (enable only in development).
- Even if Introspection is disabled, implement field-level access control to prevent unauthorized access to sensitive fields.
- Enable query complexity limits (Query Depth Limiting / Cost Analysis) to prevent DoS caused by recursive queries.
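The last defense, query depth limiting, can be sketched naively by counting brace nesting. This is a hypothetical helper in Python for illustration only; production GraphQL servers apply depth/cost rules on the parsed AST instead:

```python
def query_depth(query: str) -> int:
    # Naive depth estimate: track maximum brace-nesting level.
    # (Ignores braces inside string literals; fine for a sketch.)
    depth = max_depth = 0
    for ch in query:
        if ch == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "}":
            depth -= 1
    return max_depth

introspection = "{ __schema { types { name fields { name } } } }"
print(query_depth(introspection))       # 4
print(query_depth(introspection) <= 3)  # False -> rejected under a depth limit of 3
```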
Common Software Vulnerability Exploitation Techniques Comparison Table
| Technique | Category | Principle | Countermeasure |
|---|---|---|---|
| Buffer Overflow | Memory Safety | Write data exceeding buffer boundaries, overwriting adjacent memory (including return address), thereby controlling execution flow. | Boundary checking, Stack Canary, ASLR, DEP/NX. |
| Use-After-Free (UAF) | Memory Safety | Memory is not cleared after release, attacker allocates new object to occupy the same address, manipulating the new object through the old pointer. | Set pointer to null after release, use smart pointers, memory-safe languages (e.g., Rust). |
| Heap Spray | Memory Safety | Fill the Heap with many copies of Shellcode (the attacker's malicious machine code) plus NOP Sleds (runs of no-operation instructions that slide execution flow into the Shellcode), raising the probability that a hijacked jump lands in malicious code; often combined with UAF or overflow. | ASLR, limit heap memory allocation upper limit, memory isolation. |
| Integer Overflow | Memory Safety | Integer operation wraps around after exceeding type range, leading to length calculation errors, triggering buffer overflow or logic bypass. | Use safe integer libraries, check range before explicit type conversion; C# defaults to unchecked (silent wrap), can use checked block or /checked compiler option to throw OverflowException. |
| Format String Attack | Injection | Pass user input directly into printf() and other formatting functions (e.g., printf(user_input)), attacker can read Stack memory or write to arbitrary addresses using %n. | Never use external input as format string, fixed use of printf("%s", user_input); enable compiler warnings. |
| XML External Entity Injection (XXE) | Injection | XML parser processes external entity declarations (<!ENTITY xxe SYSTEM "file:///etc/passwd">), reading local files or initiating SSRF requests. | Disable external entity parsing (FEATURE_EXTERNAL_GENERAL_ENTITIES = false), use parsers that do not support DTD. |
| Server-Side Template Injection (SSTI) | Injection | User input embedded directly into template engine (e.g., Jinja2, Thymeleaf) rendering, triggering template syntax to execute arbitrary code (RCE, Remote Code Execution). | Input must not be directly concatenated into template strings; template engine sandboxing; distinguish between data and template context. |
| Insecure Deserialization | Injection | When deserializing untrusted data, attacker can manipulate Object Graph to trigger specific constructors or callbacks (e.g., PHP's __wakeup, C#'s IDeserializationCallback), leading to RCE or privilege escalation. C#'s BinaryFormatter is a typical high-risk API (.NET 5 marked deprecated, .NET 9 removed). | Do not deserialize data from untrusted sources; C# switch to System.Text.Json or XmlSerializer with type whitelist; verify signature of serialized data. |
| Race Condition / TOCTOU (Time-of-Check to Time-of-Use) | Logic & Competition | Time gap between "check" and "use," attacker replaces resources (e.g., files, symbolic links) during this period, invalidating check results. | Use atomic operations; lock after acquiring resource (File Locking); avoid relying on intermediate states that can be modified externally. |
| Prototype Pollution | Logic & Competition | In JavaScript, manipulate __proto__ or constructor.prototype to pollute the prototype shared by all objects, thereby injecting malicious attributes affecting global behavior. | Create objects without prototypes using Object.create(null); freeze prototypes (Object.freeze(Object.prototype)); input key blacklist filtering (__proto__, constructor). |
| Side-channel Attack | Observation Inference | Do not attack the algorithm directly, but measure physical characteristics (timing, power consumption, electromagnetic radiation, cache hit rate) during system execution to restore secret data. Typical case: Spectre / Meltdown exploits cache timing differences in CPU speculative execution to read cross-process memory. | Constant-time algorithms, Retpoline/IBRS, KPTI. |
| Zero-day Exploit | Other | Exploit unknown vulnerabilities not yet public or patched by vendors, because there are no available patches, defense side cannot rely on signature detection. | Defense in Depth; behavioral detection (EDR / XDR); Principle of Least Privilege; network segmentation to limit lateral movement. |
| Fileless Malware | Other | Malicious code does not land (not written to disk), executes directly in memory (e.g., via PowerShell, WMI, Living-off-the-Land Binaries). Traditional antivirus relies on file scanning, making it difficult to detect. | AMSI (Antimalware Scan Interface): Script engines (PowerShell, VBScript, WMI) pass content to antivirus engine before execution, intercepting decrypted plaintext scripts in memory, the most effective system-level defense against LOLBins; Script Block Logging; behavioral EDR; restrict PowerShell execution policy. |
| Return-Oriented Programming (ROP) | Memory Safety | Because DEP/NX prevents executing Shellcode, attacker chains existing Gadgets (short code sequences ending in ret), combining them into arbitrary logic, bypassing non-executable restrictions. | CFI, Shadow Stack, ASLR make Gadget addresses difficult to predict and control flow impossible to hijack. |
| DLL Side-Loading | Supply Chain / Hijacking | Exploit Windows DLL search order, place malicious DLL with the same name in the program's directory, causing the program to load the malicious version first upon startup. Often paired with legitimate and digitally signed programs (e.g., antivirus) to evade detection. | Enable SafeDllSearchMode (default); load DLLs using absolute paths in code; verify DLL digital signature; application whitelist control. |
Buffer Overflow: Stack Memory Layout
When a function is called, the Stack grows from high address to low address. The complete layout with Stack Canary defense is as follows:
High Address
┌──────────────────────────────────┐
│ Return Address │ ← Attack target: overwrite to control execution flow
├──────────────────────────────────┤
│ Saved EBP (Caller Base Pointer) │ ← Overwritten along the way
├──────────────────────────────────┤
│ ★ Stack Canary │ ← Defense: random value, verified before return
├──────────────────────────────────┤
│ Local Variables / Buffer │ ← Write start point, overflow direction ↑
└──────────────────────────────────┘
Low Address (Stack grows downward)

State after overflow:
High Address
┌──────────────────────────────────┐
│ 0xdeadbeef (Attacker controlled)│ ← Return address tampered!
├──────────────────────────────────┤
│ AAAAAAAA │ ← Saved EBP corrupted
├──────────────────────────────────┤
│ AAAAAAAA │ ← ★ Canary overwritten → Detected!
├──────────────────────────────────┤
│ AAAA... (Long input) │ ← Buffer + overflow
└──────────────────────────────────┘
Low Address

Normal vs Overflow Comparison
| Stack Position (High → Low) | Normal State | After Overflow |
|---|---|---|
| Return Address | 0x00401234 (Legal address) | 0xdeadbeef (Attacker controlled) |
| Saved EBP | Caller Base Pointer | AAAAAAAA (Corrupted) |
| Stack Canary | 0x7f3a9c01 (Random value) | AAAAAAAA (Overwritten → Detected!) |
| Buffer (16 bytes) | Normal data | AAAA... (Long input) |
Before the function ret instruction executes, the program verifies if the Canary is intact. If overwritten → immediately terminate the program (__stack_chk_fail), preventing the attacker from controlling the return address.
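The canary check can be modelled in a few lines. The following Python toy mimics the frame layout and the `__stack_chk_fail` behaviour; every name, address, and field width here is invented purely for illustration, not how a real compiler lays out memory.

```python
CANARY = 0x7F3A9C01  # in reality a per-process random value

def make_frame(buf_size: int = 16) -> dict:
    """Toy stack frame: buffer, then canary, then saved EBP, then return address."""
    return {"buffer": bytearray(buf_size), "canary": CANARY,
            "saved_ebp": 0x0012FF70, "ret_addr": 0x00401234}

def strcpy_unchecked(frame: dict, data: bytes) -> None:
    """Unbounded copy: bytes past the buffer spill into the adjacent fields."""
    buf = frame["buffer"]
    for i, b in enumerate(data):
        if i < len(buf):
            buf[i] = b                       # within bounds
        elif i < len(buf) + 4:
            frame["canary"] = 0x41414141     # overflow clobbers the canary first
        elif i < len(buf) + 8:
            frame["saved_ebp"] = 0x41414141  # then the saved base pointer
        else:
            frame["ret_addr"] = 0xDEADBEEF   # finally the return address

def function_epilogue(frame: dict) -> int:
    """Before 'ret', verify the canary; abort if it changed (__stack_chk_fail)."""
    if frame["canary"] != CANARY:
        raise RuntimeError("*** stack smashing detected ***")
    return frame["ret_addr"]
```

The key point the model captures: an overflow long enough to reach the return address must pass through the canary on the way, so the tampering is detected before `ret` ever uses the corrupted address.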
C# Situation
C# code managed by CLR has automatic boundary checking and cannot suffer from traditional Buffer Overflow. However, when using unsafe blocks to operate on raw pointers, boundary checking is bypassed:
unsafe void Vulnerable(string input) {
byte* buffer = stackalloc byte[16];
fixed (char* p = input) {
for (int i = 0; i < input.Length; i++)
buffer[i] = (byte)p[i]; // Overwrites Stack when input > 16 characters!
}
}

Avoid using unsafe; when high-performance buffer operations are needed, switch to Span&lt;T&gt; or Memory&lt;T&gt; (still has boundary checking).
TOCTOU: Time Gap Between Check and Use
Root cause: the Check and the Use are not atomic operations; between them lies a time window that an attacker can exploit.
C# Example
// Vulnerable: Time gap between File.Exists and File.WriteAllText
if (File.Exists(path)) {
// Attacker might replace the symbolic link pointed to by path here
File.WriteAllText(path, data);
}
// Safer: FileMode.CreateNew performs atomic judgment at the kernel level "create only if it doesn't exist"
// Throws IOException if it already exists (including symbolic link target)
using FileStream fs = new(path, FileMode.CreateNew, FileAccess.Write);
using StreamWriter writer = new(fs);
writer.Write(data);

Supplement: In Unix environments, this must be paired with the O_NOFOLLOW flag (called via P/Invoke in C#) to refuse following symbolic links; on Windows, creating symbolic links requires administrator privileges, so the risk is relatively low.
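The same atomic create-if-absent pattern exists in other runtimes. A Python sketch using `os.open` with `O_CREAT | O_EXCL` (the Unix equivalent of FileMode.CreateNew), adding `O_NOFOLLOW` where the platform provides it:

```python
import os

def write_new_file_atomically(path: str, data: bytes) -> None:
    """Create-if-absent is decided atomically inside the kernel (O_EXCL),
    so there is no check/use window for an attacker to exploit."""
    flags = os.O_WRONLY | os.O_CREAT | os.O_EXCL
    # On Unix, O_NOFOLLOW additionally refuses to follow a symlink at path.
    if hasattr(os, "O_NOFOLLOW"):
        flags |= os.O_NOFOLLOW
    fd = os.open(path, flags, 0o600)  # raises FileExistsError if path exists
    with os.fdopen(fd, "wb") as f:
        f.write(data)
```

The design point is identical to the C# version: the existence check and the creation are one kernel call, not two userland steps.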
Side-channel Attack: Why does "observation" allow attack?
Core premise: Algorithms have subtle differences in execution behavior when processing different data, and these differences leak in measurable physical characteristics.
Example 1 — Timing Attack: Character Comparison
Common incorrect password comparison implementation (C#):
// Vulnerable: Early return leaks timing information
bool ComparePassword(byte[] stored, byte[] input) {
if (stored.Length != input.Length) {
return false;
}
for (int i = 0; i < stored.Length; i++) {
if (stored[i] != input[i]) {
return false; // Return early upon first mismatch
}
}
return true;
}

- Input "bXXX" vs correct password "abcd": the first character doesn't match, returns immediately → shortest time.
- Input "aXXX": the first character matches, continues to compare the second before returning → slightly longer time.
Attackers can brute-force character by character: whichever input takes the longest time means that character was guessed correctly. No need to guess everything at once.
Why network latency doesn't block the attack: Law of Large Numbers
Network latency is random noise (normal distribution, random fluctuations up and down); CPU execution time difference is a fixed bias (has a fixed factor of "how many bytes match"). Attackers repeat the same request 10,000 to 100,000 times to the same target and take the average; when the sample size is large enough, random latency cancels out, and the underlying fixed time difference emerges from the noise. Even over the public network, it is feasible; all that is needed is a sufficient number of samples.
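This averaging effect is easy to reproduce in simulation. The Python sketch below models an early-exit comparison whose per-byte signal is drowned in much larger Gaussian jitter, then recovers the first byte by averaging many samples. All constants (secret, delay, noise level, sample count) are invented for the demonstration.

```python
import random

SECRET = b"abcd"            # hypothetical stored password
BYTE_DELAY = 1.0            # fixed cost per compared byte (arbitrary time units)
NOISE_SD = 20.0             # simulated network jitter, 20x larger than the signal

def leaky_compare_time(guess: bytes) -> float:
    """Simulated duration of an early-exit comparison, plus random jitter."""
    t = random.gauss(0, NOISE_SD)
    for s, g in zip(SECRET, guess):
        t += BYTE_DELAY     # each loop iteration costs fixed time
        if s != g:
            return t        # early return leaks how far the comparison got
    return t

def guess_first_byte(samples: int = 50_000) -> int:
    """Average many noisy timings per candidate; the slowest mean wins."""
    best_byte, best_mean = -1, float("-inf")
    for c in b"abcd":       # small candidate set keeps the demo fast
        total = sum(leaky_compare_time(bytes([c]) + b"XXX") for _ in range(samples))
        if total / samples > best_mean:
            best_byte, best_mean = c, total / samples
    return best_byte

random.seed(0)
print(chr(guess_first_byte()))  # 'a' emerges once the jitter averages out
```

With these numbers the per-byte signal is 1 unit against jitter with standard deviation 20, yet the standard error of a 50,000-sample mean is about 0.09, so the correct candidate stands out by many sigmas; exactly the law-of-large-numbers argument above.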
The correct approach is to use constant-time comparison, .NET provides CryptographicOperations.FixedTimeEquals (.NET Core 2.1+):
using System.Security.Cryptography;
bool ComparePasswordSafe(byte[] stored, byte[] input) {
// Regardless of how many characters match, execution time is constant, no information leaked
return CryptographicOperations.FixedTimeEquals(stored, input);
}

Example 2 — Cache Timing Attack: Spectre
CPU, to improve performance, will "speculatively execute" subsequent instructions before the condition judgment result is out:
// Even if x is out of bounds, CPU may still speculatively execute the following code:
if (x < array.length) {
y = secret_array[x]; // ① Read secret data into y
temp = probe_array[y * 4096]; // ② Access different cache lines based on y
}
// Speculation incorrect → result discarded, but cache state retained!

The attacker then times each position of probe_array:
- Fast access (Cache Hit) → that cache line was accessed by ② → can infer the value of y → can restore secret_array[x].
This allows reading cross-process memory (including OS Kernel data) that the program is theoretically unauthorized to access.
Example 3 — Cloud Co-residency Attack
The nature of cloud provider (AWS, GCP, Azure) "resource sharing pools" allows multiple tenants to share the same physical server. Attackers only need to spend a few dollars to spin up a VM, and there is a probability of being scheduled to the same physical machine as the target server, thereby sharing L3 Cache and memory buses.
At this point, the attacker executes a cache timing attack locally, completely bypassing the noise problem of network latency, observing the target's cache access patterns with nanosecond precision to infer keys or secret data. Spectre and Meltdown's harm is particularly severe in cloud environments precisely because the combination of "shared hardware + speculative execution" provides ideal attack conditions.
Countermeasures
| Defense Measure | Targeted Leakage Channel | Description |
|---|---|---|
| Constant-time algorithms | Timing differences | Make the execution time of all inputs exactly the same, fundamentally eliminating timing signals. Preferred solution for cryptographic operations (e.g., CryptographicOperations.FixedTimeEquals). |
| Jitter / Timing Noise | Timing differences | Add random waiting time before and after cryptographic operations, so that the attacker's statistical average requires more samples to separate signals. Belongs to a secondary defense layer, cannot replace constant-time algorithms, but can increase attack costs; effect on local side-channel attacks (nanosecond level) is limited. |
| Retpoline / IBRS | Spectre branch speculation | Retpoline replaces indirect branches with return-trampoline sequences so speculation cannot be steered; IBRS is a CPU mechanism that restricts indirect branch speculation across privilege levels. |
| KPTI | Meltdown Kernel cache leakage | Kernel Page Table Isolation: separates kernel and user-mode page tables, so user code cannot speculatively touch kernel mappings. |
BinaryFormatter Deserialization RCE Example (C#)
BinaryFormatter deserialization reconstructs the complete object graph and triggers callbacks such as IDeserializationCallback.OnDeserialization(). The following is a simplified illustration; actual exploitation tools (e.g., ysoserial.net) use .NET Framework built-in classes to string together a Gadget Chain, and the server side does not need to have any custom classes at all.
using System.Diagnostics;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
// Attacker side: Create malicious object and serialize to bytes
BinaryFormatter formatter = new();
using MemoryStream ms = new();
formatter.Serialize(ms, new Payload());
byte[] maliciousBytes = ms.ToArray(); // Send this string of bytes to the victim side
// Victim side: Call Deserialize directly on external input (Dangerous!)
ms.Position = 0;
formatter.Deserialize(ms); // Triggers OnDeserialization
[Serializable]
class Payload : IDeserializationCallback {
public void OnDeserialization(object? sender) =>
Process.Start("calc.exe"); // Automatically executes after deserialization completes
}

Attack path in the Web Forms era: ViewState is serialized and sent to the client side. If MAC verification is disabled, or the MachineKey is leaked, the attacker can submit a malicious ViewState → the server deserializes it → the Gadget Chain is triggered.
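Python's pickle has the same class of problem: deserialization can invoke attacker-chosen callables. A minimal analogue of the gadget above, substituting a harmless builtin for the shell command a real exploit would use:

```python
import pickle

class Payload:
    """Deserialization gadget: __reduce__ tells pickle how to rebuild the
    object, namely by calling an arbitrary callable with arbitrary args.
    This mirrors the BinaryFormatter callback abuse shown above."""
    def __reduce__(self):
        # A real exploit would return (os.system, ("...",)); we call a
        # harmless builtin to show loads() executing attacker-chosen code.
        return (list, ("pwned",))

malicious_bytes = pickle.dumps(Payload())  # attacker side: serialize the gadget
result = pickle.loads(malicious_bytes)     # victim side: runs list("pwned")
print(result)                              # ['p', 'w', 'n', 'e', 'd']
```

Note that the victim never needs the Payload class: `loads` reconstructs "an object" by executing whatever callable the stream names, which is why the pickle documentation warns against unpickling untrusted data at all.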
Clarification of Common Techniques
- Side-channel Attack: Does not attack the algorithm itself, but observes physical characteristics like timing, power, cache during execution. Spectre/Meltdown is cache timing leakage of CPU speculative execution.
- Zero-day vs Fileless: Zero-day means "vulnerability not public"; Fileless means "attack does not land on disk," both can exist simultaneously.
- For the principles and applicable attack techniques of general defense technologies (ASLR, DEP/NX, Stack Canary, CFI, etc.), see Common Security Defense Techniques Comparison Table.
Social Engineering and Identity Spoofing Attacks
Social Engineering Attack Type Comparison Table
| Type | Target Scope | Technique Characteristics |
|---|---|---|
| Phishing | Large number of unspecified users | Send fake emails or pages impersonating well-known institutions (banks, Google) in bulk. |
| Spear Phishing | Specific individuals or organizations | Pre-collect target data, create highly personalized fake emails, difficult to identify. |
| Whaling | High-level executives (CEO/CFO) | A subset of Spear Phishing; the target is someone who can authorize large transfers or disclose secrets. |
| Vishing (Voice Phishing) | Phone users | Impersonate customer service or government agencies, requesting accounts or verification codes by phone. |
| Smishing (SMS Phishing) | Mobile users | Send malicious links via SMS, often disguised as package notifications or prize notifications. |
| Pretexting | Specific targets | Fabricate plausible scenarios (e.g., claiming to be IT personnel offering remote assistance) to deceive targets into providing information or executing operations. |
| Baiting | Unspecified targets | Exploit human curiosity, e.g., leaving USB flash drives implanted with malicious programs in public places. |
Social Engineering Classification Logic
- Breadth decreases: Phishing → Spear Phishing → Whaling (the smaller the target, the higher the precision).
- Media distinction:
- Email = Phishing / Spear Phishing / Whaling
- Phone = Vishing
- SMS = Smishing
- Physical bait = Baiting
Password Attack Type Comparison Table
| Attack Type | Technique Description | Speed vs Stealth |
|---|---|---|
| Brute Force | Try all possible password combinations for a single account (e.g., a, b, ..., aa, ab, ...) | Slowest, easiest to trigger account lockout |
| Dictionary Attack | Try entries one by one from a pre-organized list of common passwords (e.g., rockyou.txt) | Faster than brute force, but limited by dictionary quality |
| Credential Stuffing | Use leaked account-password pairs (from data breaches on other websites) to attempt bulk logins on the target website | Exploits users' habit of password reuse across sites; success rate approx. 0.1%–2% |
| Password Spraying | Try a few common passwords (e.g., P@ssw0rd, Company2024!) against many accounts | Only 1–2 attempts per account, deliberately staying under account lockout thresholds |
| Rainbow Table Attack | Pre-calculate hash values of many passwords to build a lookup table, then reverse-lookup the plaintext from a target hash | Space-time trade-off; salting (Salt) makes rainbow tables ineffective |
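Why salting defeats rainbow tables can be shown in two calls. A Python sketch using the standard-library PBKDF2 (iteration count and salt length here are illustrative, not a recommendation):

```python
import hashlib, os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Per-user random salt: the same password produces different digests,
    so a table precomputed from bare password hashes is useless."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

s1, d1 = hash_password("P@ssw0rd")
s2, d2 = hash_password("P@ssw0rd")
# d1 != d2: identical passwords, different salts, different digests,
# so the attacker would need a separate rainbow table per salt value.
```

The slow, iterated hash (PBKDF2 here; Argon2 or bcrypt in practice) additionally raises the cost of building any table at all.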
Credential Stuffing vs Password Spraying vs Brute Force
- Brute Force: One account × all passwords → easy to trigger lockout.
- Password Spraying: All accounts × few passwords → low-frequency probing, evades lockout.
- Credential Stuffing: Known account-password pairs × multiple websites → exploits password reuse.
- Defense commonalities: MFA (Multi-Factor Authentication), abnormal login detection, rate limiting.
- Defense differences: Brute Force can be handled by account lockout; Password Spraying requires global login failure trend analysis; Credential Stuffing requires detecting low-frequency attempts from many different source IPs.
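These defense differences can be sketched as a log-analysis heuristic. The following Python function is a simplified illustration with invented thresholds: many failures against one account suggests brute force, while one source touching many distinct accounts suggests password spraying.

```python
from collections import Counter, defaultdict

def classify_attack(failed_logins, per_account_threshold=10, spray_threshold=10):
    """failed_logins: iterable of (source_ip, account) pairs from an auth log.
    Returns a list of (finding, subject) tuples based on two simple heuristics."""
    per_account = Counter(acct for _, acct in failed_logins)
    accounts_per_ip = defaultdict(set)
    for ip, acct in failed_logins:
        accounts_per_ip[ip].add(acct)
    findings = []
    for acct, n in per_account.items():
        if n >= per_account_threshold:          # one account hammered repeatedly
            findings.append(("brute_force", acct))
    for ip, accts in accounts_per_ip.items():
        if len(accts) >= spray_threshold:       # one source probing many accounts
            findings.append(("password_spraying", ip))
    return findings
```

Detecting credential stuffing is harder than either heuristic here, since the attempts arrive at low frequency from many different source IPs; it typically needs breach-corpus matching and device/IP reputation rather than simple per-source counting.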
Hydra / Medusa Brute Force Command Examples
Hydra and Medusa are open-source network login brute force tools supporting multiple protocols.
Hydra Example
# SSH Brute Force (single user + password dictionary)
hydra -l admin -P /usr/share/wordlists/rockyou.txt ssh://192.168.1.100
# HTTP POST form brute force
# /login is login path, user=^USER^&pass=^PASS^ are form parameters, "Invalid" is failure response keyword
hydra -l admin -P passwords.txt 192.168.1.100 http-post-form \
"/login:user=^USER^&pass=^PASS^:Invalid"
# Password Spraying (multiple users + few passwords)
hydra -L users.txt -p 'Company2024!' ssh://192.168.1.100
# FTP Brute Force (user list + password list)
hydra -L users.txt -P passwords.txt ftp://192.168.1.100
# Limit concurrency and wait time (avoid triggering lockout or detection)
hydra -l admin -P passwords.txt -t 4 -W 3 ssh://192.168.1.100

Medusa Example
# SSH Brute Force (syntax similar to Hydra)
medusa -h 192.168.1.100 -u admin -P passwords.txt -M ssh
# Multi-host batch test
medusa -H hosts.txt -U users.txt -P passwords.txt -M ssh

- -l / -u: single username; -L / -U: username list file.
- -p / -P: single password / password list file.
- -t: Hydra concurrent threads (default 16); -W: wait seconds between attempts.
- Limited to use in authorized penetration testing environments.
Watering Hole Attack
Attackers do not attack the target directly, but pre-infiltrate websites frequently visited by the target, implanting malicious code (e.g., browser vulnerability exploits), waiting for the target to visit.
- Origin of name: Concept of lions waiting for prey by a water hole.
- Characteristics: Indirect attack, combines Reconnaissance (recon target's common websites) and Exploitation (exploit browser/plugin vulnerabilities).
- Typical scenario: Attacker infiltrates industry forums or supplier websites, implants JavaScript vulnerability exploitation code, targeting visitors from specific industries.
- Relationship with Drive-by Download: The technical means of Watering Hole Attack is usually Drive-by Download.
Drive-by Download
Users only browse the web (no need to click or download anything), and vulnerabilities in the browser or its plugins are automatically exploited, with malicious programs installed silently in the background.
- No user interaction required: Unlike phishing attacks, users do not need to click any links or confirm dialog boxes.
- Common vulnerability exploitation targets: Browser itself, Flash Player (deprecated), Java Applet (deprecated), PDF Reader plugins.
- Defense: Keep browsers and plugins updated, remove unnecessary plugins, enable browser sandboxing, deploy Web gateway filtering.
Typosquatting
Register domain names with spelling similar to well-known domains, using users' typing errors to direct traffic to malicious websites.
| Spoofing Technique | Example (Target: example.com) |
|---|---|
| Missing one character | examle.com |
| Extra one character | examplle.com |
| Adjacent key replacement | ezample.com (z and x are adjacent) |
| Homophone/Visual confusion | examp1e.com (number 1 replaces letter l) |
| TLD replacement | example.org, example.co |
- Package Typosquatting: Upload malicious packages with names similar to well-known packages on package management platforms like npm and PyPI (e.g., coIors spoofing colors), a type of supply chain attack.
- Defense: DNS monitoring, defensively registering common misspelled domains, accessing important websites via browser bookmarks, and using lock files and hash verification for package management.
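For DNS monitoring, defenders often generate a watchlist of likely typo domains to register or alert on. A simplified Python generator covering three of the techniques in the table above (the confusable map is a tiny illustrative subset):

```python
def typo_variants(domain: str) -> set:
    """Generate common typosquatting variants of a domain for a
    defensive DNS-monitoring watchlist."""
    name, _, tld = domain.partition(".")
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:] + "." + tld)        # missing one character
        variants.add(name[:i] + name[i] + name[i:] + "." + tld)  # extra (doubled) character
    confusables = {"l": "1", "o": "0"}                           # visual confusion (subset)
    for ch, sub in confusables.items():
        if ch in name:
            variants.add(name.replace(ch, sub) + "." + tld)
    variants.discard(domain)
    return variants
```

Running this on example.com yields, among others, examle.com, examplle.com, and examp1e.com, matching the table; a real tool would also cover adjacent-key swaps, TLD substitution, and full Unicode homoglyph sets.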
Business Email Compromise (BEC)
Attackers infiltrate or spoof enterprise executive/partner email accounts to deceive employees into executing transfers, leaking confidential data, or changing payment information.
- Difference from Whaling: Whaling is "fishing" for executives; BEC is "impersonating" executives to send emails to subordinates.
- Common techniques:
- CEO fraud: Impersonate CEO to send emails to the finance department requesting emergency transfers.
- Supplier invoice fraud: Impersonate supplier to notify of bank account changes.
- Account infiltration: Directly infiltrate employee mailboxes, send fraudulent emails from real accounts (harder to identify).
- Defense: Secondary confirmation via phone or in-person for large transfers, enable email verification mechanisms (SPF, DKIM, DMARC), employee security awareness training.
IoT and Embedded System Attacks
IoT Attack Type Comparison Table
| Chinese Name | English Name | Core Behavior | Characteristics and Identification Focus |
|---|---|---|---|
| Black Hole Attack | Black Hole Attack | Malicious node claims to be the best route, but discards all packets passing through | Traffic enters and disappears, no response, like a "black hole" swallowing packets |
| Wormhole Attack | Wormhole Attack | Two malicious nodes establish a covert tunnel in the network, making remote nodes mistakenly believe they are neighbors | Distorts routing topology, interferes with routing protocols, difficult to detect |
| Sinkhole Attack | Sinkhole Attack | Malicious node broadcasts false "best routes," attracting massive traffic to flow to itself before deciding whether to discard or tamper | Similar to Black Hole, but decides how to handle after "attracting," wider impact scope |
| Evil Twin Attack | Evil Twin Attack | Establishes a fake hotspot with the same SSID (Service Set Identifier) as a legitimate AP (Access Point), deceiving devices into connecting | Common attack for wireless networks and IoT devices, can perform man-in-the-middle attacks or credential theft |
Routing Attack Comparison
- Black Hole = Swallows packets and discards them directly, simplest.
- Sinkhole = Swallows packets and can handle them selectively, attacker has stronger control, wider impact.
- Wormhole = Does not discard packets, but distorts routing topology, making two remote malicious nodes look like neighbors, more covert.
Zigbee / IoT Protocol Security
| Protocol | Main Application | Known Risks |
|---|---|---|
| Zigbee | Smart home sensors, light control, industrial sensing | Keys are transmitted in plaintext in the air when devices join the network, can be intercepted; many manufacturers use industry-known shared keys by default, equivalent to public |
| BLE (Bluetooth Low Energy) | Wearable devices, medical equipment, smart locks | Old pairing processes (BLE 4.0/4.1) easily eavesdropped; KNOB attack can force both ends to downgrade encryption strength to extremely low |
| Z-Wave | Door locks, curtains, appliance control | Old security framework (S0) uses fixed keys, easily replayed or cracked; new version (S2) has improved, but old devices cannot be updated and still have risks |
Common Attack Techniques:
- Key Sniffing: Intercept keys transmitted in the air when Zigbee devices are initially paired.
- Replay Attack: Record legitimate control packets (e.g., "unlock" command), retransmit later to trigger the same action.
- Firmware Downgrade Attack: Force devices to install old firmware, returning them to a state containing known vulnerabilities.
Defense:
- Install Code (Factory Pre-loaded Key): Burn a unique key into the device at the factory, no need to transmit in the air when joining the network, avoiding interception.
- Disable Default Trust Center Key: Default values are known to the industry; disabling them prevents direct exploitation.
- Firmware Signature Verification: Ensure devices only accept legitimate firmware signed by the manufacturer, blocking downgrade attacks.
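A freshness counter plus a MAC is the standard way to stop replay of recorded control packets. The following protocol-agnostic Python sketch (the key and packet format are invented for illustration; real Zigbee/Z-Wave framing differs) shows why a recorded "unlock" command cannot simply be retransmitted:

```python
import hmac, hashlib

KEY = b"device-unique-install-code"  # stands in for a factory pre-loaded key

def make_command(counter: int, action: bytes) -> bytes:
    """Each command carries a monotonically increasing counter and a MAC
    over (counter || action), so replayed packets fail the freshness check."""
    msg = counter.to_bytes(4, "big") + action
    return msg + hmac.new(KEY, msg, hashlib.sha256).digest()

class Receiver:
    def __init__(self):
        self.last_counter = -1

    def accept(self, packet: bytes) -> bool:
        msg, mac = packet[:-32], packet[-32:]
        expected = hmac.new(KEY, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected):
            return False                     # forged or corrupted packet
        counter = int.from_bytes(msg[:4], "big")
        if counter <= self.last_counter:
            return False                     # replayed packet: counter is not fresh
        self.last_counter = counter
        return True
```

The MAC stops an attacker from forging a packet with a higher counter, and the counter stops them from replaying a genuine one; both checks are needed together.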
Host Infiltration and Lateral Movement
Container Escape and 4C Security Model
Container Escape attack paths:
| Path | Description |
|---|---|
| Privileged Container | Start container with full host privileges (--privileged), equivalent to allowing the container to directly manipulate host hardware and system resources, boundary effectively does not exist |
| Kernel Vulnerability | Containers share the same OS Kernel with the host; kernel vulnerabilities allow programs inside the container to break isolation and gain host control |
| Mount Host File System | Mount host sensitive directories or Docker control interface (Docker Socket) into the container, equivalent to giving the container a host administrator entry |
| Dangerous System Capabilities | Retain high-risk system capabilities not needed by the container (e.g., arbitrary disk mounting, modifying network settings), attackers exploit these capabilities to break isolation |
Kubernetes 4C Security Model (from outside to inside):
| Layer | Level | Security Focus |
|---|---|---|
| 1 | Cloud | Cloud account least privilege (IAM, Identity and Access Management); network security groups limit external exposure |
| 2 | Cluster | Role-Based Access Control (RBAC); network policies limit Pod-to-Pod communication; API Server enforces authentication |
| 3 | Container | Run as a non-administrator (non-root) user, mount a read-only root file system, drop unnecessary system capabilities, scan image vulnerabilities regularly |
| 4 | Code | Secure coding, dependency package vulnerability scanning (SCA, Software Composition Analysis), secrets not written into images |
- Defense principle: Containers should execute with least privilege, non-administrator identity, read-only root directory, and retain only necessary system capabilities.
- Container isolation relies on two mechanisms of the OS: Namespaces (isolates the view of processes, network, file system) + cgroups (limits CPU / memory usage).
- Container and Multi-tenancy Security share "shared infrastructure" risks, defense thinking is similar.
Process Injection and Process Hollowing
Both are techniques for executing malicious code in the memory space of a legitimate process, bypassing whitelist defenses and disguising as normal processes.
| Technique | Principle | Characteristics | Detection Method |
|---|---|---|---|
| Process Injection | Write malicious code (Shellcode or DLL) into the memory of a running legitimate process, borrowing the legitimate process's identity to execute | Legitimate process (e.g., explorer.exe) memory contains executable segments not belonging to the original program | EDR monitors abnormal behavior of "one process performing cross-process writing to another and triggering execution" |
| Process Hollowing | Create a suspended instance of a legitimate process, clear its memory content, implant malicious code, and then resume execution | Process name looks normal, but actual code in memory does not match the content of the executable file on disk | EDR compares the actual content loaded in memory by the process with the original executable file on disk |
These two techniques work because Windows provides cross-process operation APIs, originally for legitimate purposes like debuggers, monitoring tools, and anti-malware, which attackers borrow to achieve malicious goals. Windows API is divided into two layers:
- Win32 API: Microsoft's publicly documented high-level interface, exported by kernel32.dll, user32.dll, etc., providing capabilities such as cross-process memory writing and creating threads in another process. The Win32 API is essentially a wrapper over NTAPI.
- Native API / NTAPI: The lower-level interface, exported by ntdll.dll, corresponding directly to Windows kernel system calls (Syscalls). Most NTAPIs are not publicly documented but can still be called; because some security tools hook at the Win32 layer, attackers sometimes call NTAPI directly to bypass that layer of detection.
Process Injection Attack Process:
The attack target is a running process (e.g., explorer.exe), with the goal of making the process execute the attacker's code without its knowledge.
- Obtain target process operation permissions: Call
OpenProcess, pass in the target process's PID, and apply for a Handle from the OS. The Handle is the credential for all subsequent cross-process operations, representing "the OS allows me to operate this process." Sufficient privileges (usually administrator) are required to obtain it. - Configure space in target process memory: Call
VirtualAllocEx(Exmeans cross-process operation), configure a blank area in the target process's virtual address space, and set it toPAGE_EXECUTE_READWRITE(RWX) attribute, making this area writable and executable. The code is now in the target process's memory but not yet executed. - Write malicious code: Call
WriteProcessMemory, write malicious Shellcode directly into the memory address configured in step 2 via the Handle from step 1. - Trigger execution: Call
CreateRemoteThread(Win32) or the underlyingNtCreateThreadEx(NTAPI), create a new thread within the target process, with the start address pointing to the malicious code just written, attack complete.
Process Hollowing Attack Process:
The difference from process injection is that the attacker does not invade a process already running, but starts a legitimate process themselves and then swaps its content.
- Start legitimate process and pause immediately: Call
CreateProcesswith theCREATE_SUSPENDEDflag, starting a system process like notepad.exe or svchost.exe. The process is created but the main thread immediately enters a suspended state, not yet executing a single line of its own code. This step itself is completely legal and triggers no alerts. - Clear legitimate code (hollowing): Call NTAPI's
NtUnmapViewOfSection, unmap and clear the original executable code of this suspended process from memory. The process shell (PID, Handle, name) still exists, but the content has been cleared. - Implant malicious code: Call
VirtualAllocExto reconfigure RWX memory in the empty shell, then callWriteProcessMemoryto write the malicious Payload. - Tamper with execution entry point: Call
GetThreadContextto obtain the CPU register state of the suspended thread. At this point, the thread is stopped in the ntdll startup stub, and the entry point address is stored in the EAX (32-bit) or RCX (64-bit) register as a parameter, not the instruction pointer EIP/RIP. The attacker changes the value of EAX / RCX to the address of the malicious code, then callsSetThreadContextto write it back. - Resume process: Call
ResumeThread(Win32) orNtResumeThread(NTAPI) to lift the suspension. As soon as the process starts, it executes the attacker's malicious code, but Task Manager and most tools still show it as the legitimate notepad.exe.
Does Windows monitor these behaviors?
The Windows kernel itself does not block these API calls because they have legitimate uses, but there are several levels of restrictions:
- Access permission threshold:
OpenProcessrequires sufficient privileges. Protected processes marked as Protected Process Light (PPL) (e.g.,lsass.exe, antivirus engine processes) cannot obtain a writable Handle even by an administrator, directly blocking the injection path. - Optional protection mechanisms: Windows 10+ provides Arbitrary Code Guard (ACG), which prohibits dynamically creating or modifying executable memory pages after the process is enabled, directly blocking the
VirtualAllocExconfiguration of RWX areas. However, ACG must be enabled by the process itself, not a system default. - Security tool monitoring: Windows Defender and third-party EDR subscribe to kernel events via ETW (Event Tracing for Windows) to detect abnormal call sequences such as "configuring RWX memory across processes, then immediately writing and creating remote threads." The kernel design allows it, but it generates telemetry data available for analysis, and security tool alert logic is built on top of this.
Target processes: svchost.exe, explorer.exe, notepad.exe, etc., because security tools usually do not deeply inspect these "trusted" processes.
HTTP Request Smuggling
When there is a discrepancy in how the frontend proxy server (e.g., CDN, load balancer) and the backend server parse HTTP request boundaries, attackers can use this to "smuggle" hidden malicious requests in a single connection.
Why is there a discrepancy? HTTP has two ways to indicate request length, and the specification does not mandate which takes precedence when both exist:
- Content-Length (CL): Directly declares total length, "this package is 100 bytes total"
- Transfer-Encoding: chunked (TE): Chunked transmission; each chunk is preceded by its length, and the body ends with a `0`-length chunk: "send in batches until an empty batch"
If the frontend and backend each believe one, they will cut at different places for the same request, and the "extra part" remains in the backend buffer, mistakenly identified as the beginning of the next request.
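To make the discrepancy concrete, here is a toy sketch (illustrative only, not a real HTTP parser): feed the same body bytes to a Content-Length-style reader and a chunked-style reader and compare where each one stops.

```python
# Illustrative only: shows how a Content-Length reader and a chunked reader
# split the SAME body bytes at different boundaries.

# Body of a request carrying both CL and TE: a 0-length chunk terminator,
# followed by a smuggled request line.
body = b"0\r\n\r\nGET /admin HTTP/1.1\r\nHost: victim.com\r\n\r\n"

def read_by_content_length(data: bytes, content_length: int) -> bytes:
    # A CL-only parser takes exactly content_length bytes as the body.
    return data[:content_length]

def read_by_chunked(data: bytes) -> bytes:
    # A TE-only parser reads size-prefixed chunks until the 0-length chunk.
    out, pos = b"", 0
    while True:
        line_end = data.index(b"\r\n", pos)
        size = int(data[pos:line_end], 16)   # chunk-size line is hexadecimal
        if size == 0:
            return out                        # terminator: body ends here
        start = line_end + 2
        out += data[start:start + size]
        pos = start + size + 2                # skip chunk data + trailing CRLF

frontend_view = read_by_content_length(body, len(body))  # CL: forwards everything
backend_view = read_by_chunked(body)                     # TE: stops at the 0 chunk
leftover = body[body.index(b"\r\n\r\n") + 4:]            # stays in backend buffer

print(backend_view)   # b'' — the backend saw an empty body
print(leftover[:19])  # b'GET /admin HTTP/1.1' — pollutes the next request
```

The "leftover" bytes are exactly what the attacker counts on: the frontend forwarded them as body content, but the backend treats them as the start of the next request on the same connection.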
CL.TE Attack Process (most common):
The attacker constructs a request with both CL and TE, hiding malicious content after the TE end marker:
```
POST / HTTP/1.1
Host: victim.com
Content-Length: 44            ← Frontend uses CL: body is 44 bytes, all forwarded to backend
Transfer-Encoding: chunked    ← Backend uses TE: parses chunk by chunk, stops at 0

# ── Backend parsing range ─────────────────────────────────────────
0                             ← 0-length chunk; backend considers the request ended (TE end marker)
# ── Residual buffer: backend has stopped; what follows sticks to the next request ──
GET /admin HTTP/1.1           ← Smuggled content, pollutes the next user's request
Host: victim.com
# ── Frontend forwarding end (44 bytes total) ──────────────────────
```

Two-step mechanism for Cookie theft:
The response is sent back to the victim rather than the attacker, so the attacker needs to borrow an endpoint that stores data (e.g., comments, search history) as a relay:
Step 1: Attacker smuggles a POST with an unfinished body, targeting the storage endpoint:
```
[Backend buffer residue — smuggled by the attacker]
POST /post/comment HTTP/1.1
Host: victim.com
Content-Length: 999           ← Deliberately large; backend keeps waiting for more input
csrf=xxx&comment=             ← Body deliberately truncated, waiting for the victim's request to stick on
```

Step 2: The next user's normal request sticks on and is treated by the backend as part of the POST body:
```
POST /post/comment HTTP/1.1
...
csrf=xxx&comment=GET /home HTTP/1.1   ← Start of the user's request, treated as comment content
Host: victim.com
Cookie: session=abc123                ← The Cookie is stored into the comment as well
```

The backend saves the whole segment as a comment; the attacker then sends GET /post/X to view the comment and read the user's Cookie. The victim's browser receives an unexpected comment-submission response.
TE.CL Attack Structure (Frontend uses TE, Backend uses CL, roles swapped):
```
POST / HTTP/1.1
Host: victim.com
Transfer-Encoding: chunked    ← Frontend uses TE: reads all chunks, then forwards
Content-Length: 3             ← Backend uses CL: reads only the first 3 bytes of the body, then stops

# ── Backend parsing range (first 3 bytes of body) ─────────────────
1a                            ← Chunk-size line; backend reads "1a\r" (3 bytes) and stops
# ── Residual buffer: backend has stopped; what follows sticks to the next request ──
GET /admin HTTP/1.1           ← Smuggled content, pollutes the next user's request
Host: victim.com
# ── Frontend forwarding end ───────────────────────────────────────
0                             ← Frontend's TE end chunk (backend stopped reading long before this)
```

After the backend finishes reading a 3-byte body per CL, the rest of the chunked payload stays in the buffer. The pollution mechanism is the same as in CL.TE, with the frontend and backend roles swapped.
TE.TE Attack Structure (Both ends see TE, but one end doesn't understand the obfuscated version):
```
POST / HTTP/1.1
Host: victim.com
Transfer-Encoding: chunked    ← Standard TE header
Transfer-Encoding: xchunked   ← Obfuscated TE header (non-standard; one end recognizes it, the other ignores it)

# ── End that recognizes TE ────────────────────────────────────────
0                             ← This end parses to here: 0-length chunk, request ends
# ── End that cannot recognize TE: falls back to CL parsing ────────
# ── Residual buffer: the fallback end parses by CL; what follows sticks to the next request ──
GET /admin HTTP/1.1           ← Smuggled content, pollutes the next request
Host: victim.com
```

One end fails to recognize the obfuscated TE header and ignores it, which degrades the situation to the CL.TE or TE.CL scenario; the subsequent pollution mechanism is the same.
| Attack Variant | Frontend Parsing | Backend Parsing | Effect |
|---|---|---|---|
| CL.TE | Content-Length | Transfer-Encoding | Frontend sends complete request, backend cuts off early, remainder pollutes next request |
| TE.CL | Transfer-Encoding | Content-Length | Frontend cuts off early, backend reads excess content according to total length, remainder pollutes next request |
| TE.TE | TE (parses obfuscated header) | TE (ignores obfuscated header) | Attacker deliberately writes slightly deformed TE header, causing one end to fail to identify and switch to another parsing method |
Attack Impact:
- Hijack other users' requests (steal Cookie, Session Token).
- Bypass frontend security controls (e.g., WAF, access control).
- Execute arbitrary requests on the backend.
Defense
The root cause of the vulnerability is the HTTP/1.1 parsing discrepancy between the frontend proxy and the backend, so the fix must be applied at the proxy layer. RFC 7230 stipulates that TE takes precedence over CL when both are present; if both ends strictly adhere to this, the discrepancy disappears.
Nginx (1.13.0+ patched TE parsing logic, can add the following settings)
Reject mixed requests, return 400:
```nginx
# http {} block
map "$http_content_length:$http_transfer_encoding" $smuggling_risk {
    # Nginx concatenates the CL value and TE value of the request with a colon.
    # If both are non-empty, "~.+:.+" matches (at least one character on each
    # side of the colon) and $smuggling_risk is set to 1 (high risk).
    "~.+:.+" 1;
    default  0;
}

# server {} block
if ($smuggling_risk) {
    return 400 "Bad Request: Multiple Length Indicators";
}
```

Upgrade to HTTP/2 on the upstream leg (a fundamental solution, since HTTP/2 framing carries unambiguous lengths; the backend must also support it, and Kestrel does by default):
```nginx
proxy_http_version 2.0;
```

(Note: as of current open-source releases, Nginx's `proxy_http_version` directive only accepts 1.0 and 1.1, so HTTP/2 to the upstream is not actually available here; end-to-end HTTP/2 requires a proxy that supports it, such as HAProxy 2.x or Envoy.)

Proxy-layer normalization (strip TE so the backend only sees CL):
```nginx
proxy_set_header Transfer-Encoding "";
```

HAProxy (1.9+ enables strict header parsing by default; an ACL can additionally force-reject mixed requests):
```
# frontend or listen block
http-request deny if { req.hdr_cnt(content-length) gt 0 } { req.hdr_cnt(transfer-encoding) gt 0 }
```

ASP.NET Core (Kestrel)
Kestrel itself complies with RFC 7230 and requires no additional settings. If you need to leave warning logs at the Middleware layer as defense in depth:
```csharp
app.Use(async (context, next) =>
{
    var headers = context.Request.Headers;
    if (headers.ContainsKey("Content-Length") && headers.ContainsKey("Transfer-Encoding"))
    {
        context.Response.StatusCode = 400;
        await context.Response.WriteAsync("Bad Request: Invalid Header Combination");
        return;
    }
    await next();
});
```

DNS Tunneling
Uses the DNS protocol (usually allowed through firewalls on UDP Port 53) as a covert data transmission channel, encoding non-DNS data in DNS queries and responses.
| Aspect | Description |
|---|---|
| Principle | Attacker sets up their own DNS server, victim host encodes stolen data into subdomains (e.g., dGVzdA.attacker.com), sends via DNS query; attacker's DNS server decodes to obtain data. |
| Purpose | C2 communication (Command & Control), data exfiltration, bypassing Captive Portal (forced login page, e.g., airport / coffee shop Wi-Fi verification wall) |
| Detection Indicators | Abnormally long subdomain names, high-frequency DNS queries, abnormal TXT/NULL record query ratio, sudden surge in query volume for a single domain |
| Common Tools | iodine, dnscat2, dns2tcp |
| Defense | DNS traffic deep inspection (DPI), restrict internal DNS to forward only to trusted recursive resolvers, monitor DNS query length and frequency anomalies |
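The "abnormally long subdomain" and "encoded payload" indicators above can be approximated with a toy heuristic. The thresholds (40 characters, 3.5 bits of entropy) are illustrative assumptions, not tuned production values:

```python
# Toy heuristic for DNS tunneling indicators: flag query names whose first
# label is unusually long or unusually high-entropy (typical of Base64/hex
# encoded payloads). Thresholds are illustrative, not tuned values.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    # Bits of entropy per character over the label's character distribution.
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def looks_like_tunnel(qname: str, max_label: int = 40, min_entropy: float = 3.5) -> bool:
    first_label = qname.rstrip(".").split(".")[0]
    if len(first_label) > max_label:          # abnormally long subdomain
        return True
    # High-entropy label of meaningful length (encoded data looks random)
    return len(first_label) >= 16 and shannon_entropy(first_label) > min_entropy

print(looks_like_tunnel("www.example.com"))           # False
print(looks_like_tunnel("a" * 50 + ".attacker.com"))  # True (length check)
```

Real detection would also weight query frequency and record types (TXT/NULL ratios, per the table above); the length/entropy split is only the per-query piece of that picture.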
Cross-Platform Attack Technique Differences (Windows vs Linux)
Privilege Escalation
| Aspect | Windows | Linux |
|---|---|---|
| Common Techniques | Token Manipulation (impersonating high-privilege users), UAC Bypass, unquoted service paths (path contains spaces but not quoted, can be hijacked by malicious executables), DLL Side-Loading | SUID/SGID special execution permission abuse, sudo configuration errors (allowing execution of commands that shouldn't be open), kernel vulnerability exploitation, Cron job configuration errors, writable PATH directories |
| Key Differences | Windows manages privileges via identity tokens and Access Control Lists (ACLs), attackers often obtain system-level control (SYSTEM) by impersonating high-privilege tokens | Linux manages privileges via User ID (UID) and Group ID (GID), programs with special execution bits (SUID) are the most common targets for exploitation |
| Detection Focus | Monitor abnormal use of high-risk system privileges (e.g., debugging, impersonation) | Regularly scan for files with abnormal SUID/SGID special permissions |
Persistence
| Aspect | Windows | Linux |
|---|---|---|
| Common Techniques | Registry startup items (keys executed automatically on system boot), Scheduled Tasks, Startup Folder, WMI event subscription, service creation | crontab scheduling, Shell initialization scripts (.bashrc / .profile), systemd services, SSH authorized key implantation |
| Key Differences | Registry is a Windows-unique persistence vector, diverse and highly covert | Linux persistence mostly relies on Shell initialization scripts or scheduling tools |
| Detection Focus | Monitor changes to startup-related registry keys, scheduled task changes | Monitor crontab changes, new systemd service files, SSH authorized key changes |
Lateral Movement
| Aspect | Windows | Linux |
|---|---|---|
| Common Techniques | PsExec (remote command execution), WMI (Windows Management Instrumentation), RDP (Remote Desktop), WinRM (Windows Remote Management), DCOM (Distributed Component Object Model) | SSH, Ansible / Salt configuration management tools, stolen SSH private keys, NFS network shares |
| Key Differences | Windows domain environments have many built-in remote management tools, attackers can borrow existing management infrastructure without bringing in external tools | Lateral movement in Linux environments mainly relies on SSH; obtaining private keys allows spreading across multiple hosts |
| Detection Focus | Monitor abnormal connections for file sharing (SMB) and Remote Procedure Call (RPC) | Monitor abnormal SSH login sources, changes to SSH authorized key files |
Linux Privilege Escalation Detection Commands
```shell
# Find all SUID files (may be exploited to execute as root)
find / -perm -4000 -type f 2>/dev/null

# Find all SGID files
find / -perm -2000 -type f 2>/dev/null

# Find files with either SUID or SGID set (/6000 matches any of the bits)
find / -perm /6000 -type f 2>/dev/null

# Check the sudo configuration (which commands run as root, possibly without a password)
sudo -l

# Find writable cron directories and scripts
find /etc/cron* -writable -type f 2>/dev/null
ls -la /etc/cron.d/ /var/spool/cron/

# Find world-writable directories (candidates for PATH hijacking)
find / -writable -type d 2>/dev/null | grep -v proc

# Check whether /etc/passwd is writable (rare but fatal)
ls -la /etc/passwd /etc/shadow
```

Credential Theft and Lateral Movement Techniques
Pass-the-Hash (PtH)
Windows NTLM authentication protocol adopts a Challenge-Response mechanism; the server only needs to compare the MD4 hash of the password to complete verification, without needing the plaintext password. After obtaining the hash, the attacker can directly impersonate a legitimate user to log into other hosts without cracking the original password.
NTLM Background
The default authentication protocol before Windows 2000, later replaced by Kerberos, still retained for fallback, used in workgroup environments, local account authentication, and backup scenarios where Kerberos cannot be used.
- Prerequisite: Obtain NTLM hash from local password database (SAM, Security Account Manager) or memory.
- Impact: Obtain domain administrator hash, can move laterally throughout the Windows domain.
- Defense: Disable NTLM authentication (switch to Kerberos), enable Credential Guard (prevents reading hashes in memory), restrict login scope of privileged accounts.
Pass-the-Ticket (PtT)
Kerberos authentication uses tickets instead of passwords. After stealing the ticket, the attacker directly injects it into their own connection, accessing resources as the ticket holder without knowing the password or hash.
- Difference from PtH: PtH uses NTLM hash; PtT uses Kerberos tickets. After enabling Kerberos, PtH is invalid, attackers switch to PtT.
- Golden Ticket: After obtaining the password hash of the special account (krbtgt) responsible for signing all tickets in Kerberos, one can forge any user's TGT (Ticket-Granting Ticket), equivalent to obtaining the master key for the entire domain.
- Silver Ticket: Only forges the service ticket for a single service, scope is more limited than Golden Ticket, but harder to detect because it does not need to request from the Kerberos server.
- Defense: Regularly reset krbtgt account password (must reset twice consecutively to completely invalidate), monitor tickets with abnormally long validity, deploy Privileged Access Management (PAM).
krbtgt Account and Two-Reset Reason
krbtgt is the built-in service account of the AD domain. KDC uses its password hash to encrypt and sign all TGTs, making it the most sensitive secret in the entire domain. For compatibility, AD retains both the "current password" and the "previous password" of krbtgt, and accepts both when verifying tickets. Therefore, resetting only once leaves the stolen old password as the "previous password," and the forged Golden Ticket remains valid; only after the second reset is the old password completely erased from AD, and the Golden Ticket truly expires. Between the two resets, a period of time must elapse (roughly equal to the default TGT validity of 10 hours) to allow existing legitimate tickets to naturally expire, avoiding normal service interruption.
Kerberoasting
Kerberos allows any domain user who has logged in to request a service ticket from the Key Distribution Center (KDC), and the ticket itself is encrypted with the password hash of the service account. The attacker requests the ticket and takes it offline for brute-force cracking to restore the service account's plaintext password.
- Why it works: Service accounts (e.g., database service accounts) often do not change passwords for a long time, and password strength is insufficient; any authenticated domain user can request a service ticket (SPN, Service Principal Name), without special privileges.
- Attack steps: Enumerate accounts with SPN → Request service ticket → Take offline for brute-force cracking.
- Defense: Service accounts use random passwords of 25+ characters; switch to gMSA (Group Managed Service Account, automatically manages password rotation by Active Directory); monitor abnormal behavior of requesting large numbers of tickets in a short time.
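The "many tickets in a short time" indicator in the defense bullet can be sketched as a sliding-window count. The event tuples below are a made-up stand-in loosely modeled on the fields of Windows Event ID 4769 (service-ticket request), and the window/threshold values are illustrative:

```python
# Illustrative Kerberoasting detection heuristic: flag accounts that request
# many service tickets (TGS) within a short window. Event shape and thresholds
# are assumptions for the sketch, not a real log-ingestion API.
from collections import defaultdict

def burst_accounts(events, window: int = 60, threshold: int = 10) -> set:
    # events: iterable of (timestamp_seconds, requesting_account, target_spn)
    by_account = defaultdict(list)
    for ts, account, _spn in events:
        by_account[account].append(ts)
    flagged = set()
    for account, times in by_account.items():
        times.sort()
        for i in range(len(times)):
            # Count requests inside a sliding `window`-second interval.
            j = i
            while j < len(times) and times[j] - times[i] <= window:
                j += 1
            if j - i >= threshold:
                flagged.add(account)
                break
    return flagged

# 12 SPN requests from one account in 12 seconds: classic enumeration burst.
events = [(t, "attacker", f"svc{t}") for t in range(12)]
print(burst_accounts(events))  # {'attacker'}
```

A production rule would also consider the encryption type requested (RC4 tickets are the classic cracking target) rather than volume alone.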
DCSync (AD Replication Credential Theft)
The attacker uses the Active Directory directory replication protocol (MS-DRSR, Directory Replication Service Remote Protocol) to impersonate a Domain Controller (DC) and request account password hashes from other DCs, without needing to log into the DC or execute any programs on the DC.
- Prerequisite: Attacker account must have "Replicating Directory Changes (DS-Replication-Get-Changes-All)" permission, usually requiring Domain Admin or an account explicitly granted this permission.
- Typical use: Steal krbtgt account hash, further create Golden Ticket; or batch extract password hashes for the entire domain.
- Reason for difficulty in detection: Network traffic is no different from normal DC replication, carries no malicious tools, belongs to Living off the Land.
- Tool: Mimikatz `lsadump::dcsync /domain:corp.local /user:krbtgt`.
- Defense: Monitor replication requests originating from non-DC machines (Windows Event ID 4662); strictly restrict which principals are granted "DS-Replication-Get-Changes-All"; deploy Privileged Access Management (PAM).
Living off the Land (LOLBins)
Attackers do not carry their own tools, but use legitimate tools already present on the target system to execute malicious operations, because these tools have digital signatures and are built into the system, and will not trigger traditional antivirus software alerts.
| Platform | Term | Commonly Abused Tools | Malicious Use Example |
|---|---|---|---|
| Windows | LOLBins | certutil.exe, mshta.exe, regsvr32.exe, rundll32.exe, powershell.exe | Use certutil to download malicious files, use mshta to execute scripts (these are system built-in tools, antivirus won't block) |
| Linux | GTFOBins | curl, wget, python, perl, find, vim | Use find's SUID to escalate privileges to execute shell, use curl to download malicious scripts and execute directly |
- Why hard to detect: These tools are legitimate (administrators also use them), requiring context (who, when, from where, with what parameters executed) to determine if they are malicious.
- Relationship with Fileless Malware: Living off the Land is the main means of fileless attacks; malicious code exists only in memory, not landing as a file, making traditional antivirus even harder to detect.
- Defense: Application Whitelisting (AppLocker / WDAC), PowerShell Constrained Language Mode, Script Block Logging, EDR behavioral detection.
- Reference resources: LOLBAS Project (Windows), GTFOBins (Linux).
Detection Indicators and Emerging Threats
Indicators of Compromise (IOC) and Indicators of Attack (IOA)
| Comparison Aspect | IOC (Indicator of Compromise) | IOA (Indicator of Attack) |
|---|---|---|
| Nature | Forensic evidence left after an attack has occurred | Behavioral patterns while an attack is in progress |
| Time Point | Post-incident (Reactive) | Real-time (Proactive) |
| Typical Example | Malicious file Hash, C2 IP/Domain, malicious Registry Key, abnormal file path | Process injection behavior, abnormal PowerShell call patterns, account logging into multiple machines laterally in a short time |
| Corresponding Tool | Threat Intelligence Platform (TIP), SIEM matching rules, YARA rules | EDR / XDR behavioral analysis, UEBA (User and Entity Behavior Analytics) |
| Limitation | Attackers can easily change Hash/IP (low-cost avoidance) | Requires more mature detection capabilities and baseline establishment |
- IOC lifecycle is short (invalidated every time the attacker changes tools), but establishment cost is low.
- IOA focuses on "behavior" rather than "characteristics"; even if the attacker changes tools, the behavioral pattern remains similar, and detection effect is more durable.
- In practice, the two are complementary: IOC is used for quick matching of known threats, IOA is used for detecting unknown or new attacks.
- IOC is like "fingerprints at a crime scene," IOA is like "suspicious behavior on surveillance."
- Tactics and techniques described in the MITRE ATT&CK framework are essentially IOAs, which can serve as sources for Threat Hunting hypotheses.
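The contrast can be sketched in a few lines (illustrative, not a real EDR rule engine): an IOC match is a cheap set lookup on a static indicator, while an IOA match looks for an ordered behavioral sequence regardless of which tool produced it. The hash feed and API-call lists here are made-up stand-ins:

```python
# IOC vs IOA in miniature. IOC: static indicator lookup. IOA: ordered
# behavioral pattern (here, the classic cross-process injection sequence).
import hashlib

IOC_HASHES = {hashlib.sha256(b"known-bad-sample").hexdigest()}  # stand-in feed

def ioc_match(file_bytes: bytes) -> bool:
    # Static indicator: trivially evaded by changing one byte of the sample.
    return hashlib.sha256(file_bytes).hexdigest() in IOC_HASHES

# Behavioral pattern: survives tool changes as long as the technique is reused.
INJECTION_SEQUENCE = ["VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"]

def ioa_match(api_calls: list) -> bool:
    it = iter(api_calls)
    # True if INJECTION_SEQUENCE appears as an ordered subsequence of the calls.
    return all(step in it for step in INJECTION_SEQUENCE)

print(ioc_match(b"known-bad-sample"))  # True
print(ioa_match(["OpenProcess", "VirtualAllocEx",
                 "WriteProcessMemory", "CreateRemoteThread"]))  # True
print(ioa_match(["VirtualAllocEx", "ReadFile"]))  # False
```

Note how flipping one byte defeats `ioc_match` entirely, while `ioa_match` still fires on any tool that performs the same call sequence: exactly the durability difference the table describes.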
Threat Intelligence and Security Assessment
Threat Hunting
Threat Hunting is a proactive, hypothesis-driven search method where security personnel actively look for potential threats that have not yet triggered alerts in the environment, complementing Incident Response (IR) that passively waits for SIEM alerts.
| Aspect | Threat Hunting | Incident Response |
|---|---|---|
| Trigger Method | Proactive (driven by hypothesis, no alert trigger needed) | Passive (triggered by SIEM / IDS alerts) |
| Purpose | Find potential attackers or IOCs not yet detected | Respond to confirmed known incidents |
| Applicable Scenario | APT lurking, new attack techniques, detection tool coverage blind spots | Containment, eradication, recovery after a security incident occurs |
Threat Hunting Process:
- Establish Hypothesis: Based on threat intelligence (CTI), MITRE ATT&CK tactics, or past incidents, propose a hypothesis that "an attacker might be using X technique."
- Collect and Search Data: Search for IOCs or IOAs in the hypothesis within EDR, SIEM, and log systems.
- Analyze and Identify: Distinguish malicious behavior from normal noise, confirm whether a real threat exists.
- Respond and Improve: If a threat is found, convert to Incident Response; if not found, convert the search logic into automated detection rules to continuously strengthen detection capabilities.
Diamond Model
The Diamond Model is a foundational framework for threat intelligence analysis, where every intrusion event can be described by four core dimensions, forming a diamond shape.
| Dimension | Description | Example |
|---|---|---|
| Adversary | The actor launching the attack | State-sponsored APT group, cybercrime syndicate |
| Capability | Tools, malware, vulnerability exploitation techniques used by the attacker | Cobalt Strike, Log4Shell vulnerability exploitation |
| Infrastructure | Physical resources controlled or used by the attacker | C2 server, compromised relay machine, malicious domain |
| Victim | Target of the attack | Specific enterprise, critical infrastructure, specific personnel |
- Core assumption of the model: Attackers need Capability via Infrastructure to affect the Victim; breaking any link can disrupt the attack.
- Complementary to MITRE ATT&CK: The Diamond Model describes "who, using what, via where, attacking whom" (overall relationship), ATT&CK describes "how it is done" (technical details).
- Community extension: Meta-diamond connects multiple related events, attributing them to the same APT group.
Threat Actor Classification Comparison Table
| Actor Type | English | Motivation | Skill Level | Resource Scale | Typical Attack Target |
|---|---|---|---|---|---|
| Script Kiddie | Script Kiddie | Show-off, curiosity | Low (uses off-the-shelf tools) | Individual | Random targets, website defacement |
| Cybercriminal | Cybercriminal | Money | Medium to High | Criminal group | Financial institutions, personal data |
| Hacktivist | Hacktivist | Political ideology, social justice | Medium | Organization/Collective | Government agencies, enterprise websites |
| Insider Threat | Insider Threat | Revenge, money, coercion | High (possesses internal privileges) | Individual | Employer systems and data |
| Nation-State | Nation-State | National interest, espionage | Extremely High | National resources | Critical infrastructure, government, military-industrial enterprises |
| APT | APT (Advanced Persistent Threat) | Long-term lurking, intelligence gathering | Extremely High | National or large organization | High-value targets, supply chain |
Threat Intelligence Source Classification
| Type | Description | Representative Organization/Resource |
|---|---|---|
| Commercial Intelligence | Paid subscription threat intelligence services and OSINT tools | Commercial security vendors, CVE, MITRE ATT&CK, VirusTotal |
| Government Intelligence | Threat notifications released by national-level CERTs and law enforcement agencies | TWCERT/CC, US-CERT, NCSC |
| Community Intelligence | Industry ISAC and open-source intelligence sharing platforms | Financial FS-ISAC, Energy E-ISAC, MISP |
Threat Intelligence Sharing Classification: TLP 2.0
CISA (Cybersecurity and Infrastructure Security Agency) adopts TLP (Traffic Light Protocol) 2.0 as the standard classification for threat intelligence sharing, using colors to indicate the scope of information dissemination. TLP is not limited to media; it can be applied to emails, reports, and meeting presentations.
| Label | Dissemination Target | Description |
|---|---|---|
| TLP:RED | Original recipients only | Must not be forwarded to non-original recipients, must not exceed the original sharing context (e.g., attendees at the meeting). |
| TLP:AMBER+STRICT | Within the original recipient's organization | Transmitted within the organization on a need-to-know basis, cannot cross organizations, stricter limits than AMBER. |
| TLP:AMBER | Original recipient's organization and direct customers | Can be transmitted within the organization on a need-to-know basis, and can be extended to direct business customers. |
| TLP:GREEN | Same security community | Can be widely transmitted within the same security community (e.g., ISAC, specific security forums), cannot be publicly released on the Internet. |
| TLP:CLEAR | No restrictions | No dissemination restrictions, can be shared publicly or released on the Internet. |
TLP 2.0 Key Points
- Restrictions from loose to strict: CLEAR → GREEN → AMBER → AMBER+STRICT → RED.
- TLP:AMBER vs TLP:AMBER+STRICT: AMBER allows transmission to direct customers, AMBER+STRICT is limited to within the organization.
- Main differences between 2.0 and 1.0: 2.0 renamed "WHITE" to "CLEAR," added "AMBER+STRICT," and explicitly defined the boundaries of "Community."
Threat Intelligence Standards (STIX / TAXII / CVE / CVSS)
Threat Intelligence Four Classifications (by user level)
The four levels are arranged by abstraction, corresponding to "trend judgment → attack warning → technique analysis → indicator blocking" decision needs from high to low.
| Type | English | Target User | Typical Example |
|---|---|---|---|
| Strategic Intelligence | Strategic Intelligence | Senior management, CISO, Board of Directors | Threat trends, attacker motivation, geopolitical risks |
| Operational Intelligence | Operational Intelligence | SOC analysts, CSIRT | Details of specific attack activities in progress or imminent, targets |
| Tactical Intelligence | Tactical Intelligence | Security architects, red teams | Attacker TTPs, MITRE ATT&CK mapping |
| Technical Intelligence | Technical Intelligence | Firewall/SIEM administrators | Specific IOCs (IP address, malicious Hash, malicious Domain) |
💡 Terminology Quick Check
STIX / TAXII:
STIX defines the structured format of threat intelligence, TAXII is responsible for transmission, and the two are usually used together, forming the basis for automated exchange of threat intelligence.
| Item | English | Description |
|---|---|---|
| STIX 2.1 | Structured Threat Information eXpression | Expresses threat intelligence in JSON format, covering object types such as attack patterns, malicious indicators IOC, attacker organizations, etc. |
| TAXII | Trusted Automated eXchange of Indicator Information | Transmission protocol for STIX data, defines two exchange modes: Collection (pull) and Channel (push) |
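For a feel of the format, here is a minimal hand-written example of the general shape of a STIX 2.1 Indicator object; the UUID, timestamps, and domain are made-up illustrative values, not from a real feed:

```python
# Sketch of a STIX 2.1 Indicator object (JSON). Field values are illustrative.
import json

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--d81f86b9-975b-4c0b-875e-810c5ad45a4f",  # made-up UUID
    "created": "2024-01-01T00:00:00.000Z",
    "modified": "2024-01-01T00:00:00.000Z",
    "name": "Known C2 domain",
    # The pattern expresses the IOC itself, in STIX patterning syntax.
    "pattern": "[domain-name:value = 'c2.example.com']",
    "pattern_type": "stix",
    "valid_from": "2024-01-01T00:00:00.000Z",
}
print(json.dumps(indicator, indent=2))
```

A TAXII server would expose objects like this under a Collection endpoint for clients to pull, which is what makes the intelligence exchange machine-readable end to end.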
CVE / CWE / CVSS:
CVE records specific vulnerability events, CWE describes the type of weakness behind the vulnerability, CVSS measures its severity, and the three together form the basis of vulnerability management vocabulary.
| Item | English | Description |
|---|---|---|
| CVE | Common Vulnerabilities and Exposures | Vulnerability unique numbering system, format CVE-Year-SequenceNumber (e.g., CVE-2024-12345). Maintained by MITRE, NVD (maintained by NIST) provides search and detailed analysis |
| CWE | Common Weakness Enumeration | Software weakness classification system maintained by MITRE, describes weakness types (e.g., SQL Injection, Buffer Overflow), not specific vulnerability events |
| CVSS | Common Vulnerability Scoring System | Vulnerability severity score (0.0–10.0), maintained by FIRST |
CVSS v4.0 (released by FIRST in November 2023) is the current version; its Base metrics are divided into three groups totaling 11 metrics:
Exploitability Metrics
| Metric | Abbreviation | Options (score high to low) | Description |
|---|---|---|---|
| Attack Vector | AV | Network > Adjacent > Local > Physical | Same as v3.x |
| Attack Complexity | AC | Low > High | Difficulty for attacker to actively evade existing defense conditions |
| Attack Requirements | AT | None > Present | Deployment or execution conditions of the vulnerable system that must exist for the attack to succeed (new in v4.0) |
