iPAS Exam Preparation Notes - Information Security Engineer

While preparing for iPAS-related topics recently, I took the opportunity to organize the concepts involved in the Information Security Engineer (Junior) certification. While working through practice questions generated by Gemini Gem, I found that some terms and distinctions are easily confused, and much of the content is inherently aligned with engineering practice. Therefore, I decided to compile this into a note set focused on understanding and practical application.

The following are just my personal notes; the content does not aim to fit perfectly within any specific question bank. I have deliberately increased the difficulty of the prompts for Gemini Gem, so some sections also include background, tools, or contexts that often appear together in practice.

These notes are a collection of thoughts as they come to me, and I may supplement them at any time while continuing to practice questions. Additionally, there are many code and command sections in the notes, which are included for ease of understanding or as a reference when needed, so I have used details tags to collapse them.

Basic Concepts

Fundamental Principles and Terminology of Information Security

CIA Triad (Core)

| Concept | Description | Control Measures |
| --- | --- | --- |
| Confidentiality | Ensure information is accessed only by authorized parties | Encryption, access control, data classification |
| Integrity | Ensure information is not modified without authorization | Hash verification, digital signatures, version control |
| Availability | Ensure authorized users can access information in a timely manner | Backup, disaster recovery, load balancing |
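
As a quick illustration of the Integrity attribute, the hash-verification control in the table can be sketched in a few lines of Python (the file contents and digests here are made up for the example):

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

original = b"quarterly-report-v1"
digest_at_rest = sha256_digest(original)  # stored alongside the data

# Later, re-hash and compare: any modification changes the digest.
tampered = b"quarterly-report-v2"
print(sha256_digest(original) == digest_at_rest)  # True: unchanged
print(sha256_digest(tampered) == digest_at_rest)  # False: integrity violated
```

Note that a plain hash only detects accidental or naive modification; an attacker who can rewrite the data can also rewrite the digest, which is why digital signatures (hash + private key) appear in the same table row.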

Extended Security Attributes

| Concept | Description | Control Measures |
| --- | --- | --- |
| Authenticity | Verify the authenticity of the information source | Digital signatures, PKI, multi-factor authentication |
| Non-repudiation | Ensure actions cannot be denied after the fact | Digital signatures, audit logs, timestamp services |
| Accountability | Ensure actions can be traced to a specific individual | Identity authentication, access logs, separation of duties |

AAA Framework (Authentication, Authorization, Accounting)

| Element | Description | Control Measures |
| --- | --- | --- |
| Authentication | Verify user identity ("Who are you?") | Account passwords, MFA/FIDO2, biometrics |
| Authorization | Determine accessible resources ("What can you do?") | RBAC, ABAC, OAuth 2.0 |
| Accounting | Record user actions for tracking ("What did you do?") | Audit logs, SIEM, NetFlow |
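
A minimal sketch of the three AAA stages in Python. Everything here (the user store, roles, permissions) is invented for illustration; a real system would use a salted password KDF such as bcrypt or Argon2 plus MFA, not a bare SHA-256:

```python
import hashlib, hmac, time

USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}  # demo-only credential store
ROLE_PERMS = {"analyst": {"read_logs"}, "admin": {"read_logs", "change_config"}}
USER_ROLES = {"alice": "analyst"}
AUDIT_LOG = []                                            # the Accounting trail

def authenticate(user: str, password: str) -> bool:
    """Authentication: who are you? Compare password hashes in constant time."""
    expected = USERS.get(user)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    return expected is not None and hmac.compare_digest(expected, supplied)

def authorize(user: str, action: str) -> bool:
    """Authorization: what can you do? RBAC lookup by the user's role."""
    return action in ROLE_PERMS.get(USER_ROLES.get(user, ""), set())

def account(user: str, action: str, allowed: bool) -> None:
    """Accounting: what did you do? Append to an append-only audit log."""
    AUDIT_LOG.append({"ts": time.time(), "user": user, "action": action, "allowed": allowed})

ok = authenticate("alice", "s3cret")
allowed = authorize("alice", "read_logs")
account("alice", "read_logs", allowed)
print(ok, allowed, len(AUDIT_LOG))  # True True 1
```

The point of the sketch is the ordering: authorization only makes sense after authentication succeeds, and accounting records the outcome either way.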

CIA vs AAA

  • CIA describes the security attributes information should possess (protection goals); AAA describes the three stages of access control (implementation mechanisms). The two are complementary.

Asset Valuation Criteria

| Value Type | Evaluation Factors | Quantification Method | Example |
| --- | --- | --- | --- |
| Direct Cost | Reconstruction, procurement costs | Monetary value | Software license fees, hardware equipment costs |
| Indirect Cost | Operational disruption, reputational damage | Estimated revenue loss | Loss per hour of system downtime |
| Legal Cost | Regulatory compliance, risk of fines | Potential penalties | GDPR fines up to 4% of annual turnover |
| Competitive Value | Intellectual property, trade secrets | Competitive advantage assessment | R&D results, customer lists |

Defense in Depth

An architectural strategy of multi-layered security controls, ensuring that even if one layer is breached, other layers still provide protection.

| Layer | Control Measures |
| --- | --- |
| Governance Layer | Policy, security awareness training |
| Physical Security | Access control, CCTV |
| Network Perimeter | Firewall / IPS (Intrusion Prevention System) / SDP (Software Defined Perimeter) |
| Internal Network | Zero Trust / NAC (Network Access Control) |
| Host Security | EDR (Endpoint Detection and Response) / AppLocker (application whitelisting) |
| Application Security | WAF (Web Application Firewall) / RASP (Runtime Application Self-Protection) / SSDLC (Secure Software Development Lifecycle) |
| Data Security | Encryption / DLP (Data Loss Prevention) |

Security Governance Document Hierarchy

| Level | Nature | Example |
| --- | --- | --- |
| Policy | Explains "what to do" and "why," without specifying technical details. Approved by top management; mandatory, violation constitutes non-compliance. | "All data transmission must be encrypted" |
| Standard | Explains the "minimum technical threshold to meet policy requirements"; mandatory. Specifies specific versions, algorithms, or settings; violation constitutes non-compliance. | "Use TLS 1.2 or higher" |
| Procedure | Explains "how to execute," listing repeatable operational steps. Mandatory; personnel must follow the steps, and deviation requires formal approval. | "Account application SOP" |
| Guideline | Provides recommended practices; non-mandatory. Personnel can judge whether to adopt based on context; deviation does not constitute non-compliance. | "Recommended password length of 12+ characters" |
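
As a concrete example, a Standard such as "Use TLS 1.2 or higher" can be enforced directly in client code. This is a sketch using the Python standard-library ssl module, not tied to any particular service:

```python
import ssl

# Client-side enforcement of a "TLS 1.2 or higher" Standard:
# handshakes offering only TLS 1.0/1.1 will be refused.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

The Policy level ("all transmission must be encrypted") does not care which line of code achieves this; the Standard pins the floor (TLS 1.2), and a Procedure would spell out where and how to set it.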

Information Ethics: PAPA Theory

Proposed by scholar Richard Mason in 1986, defining four ethical issues in the information age.

| Abbreviation | Issue | Core Question |
| --- | --- | --- |
| P | Privacy | Individuals have the right to decide whether to disclose their own information |
| A | Accuracy | Responsibility for the authenticity and correctness of information |
| P | Property | Ownership of information intellectual property rights |
| A | Accessibility | Under what conditions is one qualified to access information |

Difference between Accessibility and Availability

Accessibility ≠ Availability: The former is "who is qualified to use it," the latter is "can the system be used."

Information Asset Classification Standards

| Level | Standard | Typical Example |
| --- | --- | --- |
| Public | Disclosure causes no harm | Marketing materials on the company website, announced financial reports |
| Internal | Disclosure causes no major damage, but not actively disclosed | Internal company website operational procedure documents |
| Confidential | Disclosure may damage the enterprise | Trade secrets, unannounced product R&D plans |
| Private | Disclosure may damage others' privacy | Employee ID numbers, customer credit card numbers, etc. |

Government Data Classification

In Taiwan, the government classifies confidential data from high to low according to the "National Classified Information Protection Act": Absolute Secret → Top Secret → Secret. General official documents are classified as "Secret" or "General" according to the "Document Processing Manual." Similar to the four-level enterprise classification logic, the core difference is that government classification focuses on national security impact, while enterprise classification focuses on commercial damage.

Asset Management Roles

| Role | Typical Holder | Responsibility |
| --- | --- | --- |
| Asset Owner | Business Unit Manager | Determines classification level, approves access authorization, bears final security responsibility |
| Asset Custodian | IT Department | Implements storage, backup, access control, and other technical measures according to owner instructions; no authority to adjust levels independently |

Key Points for Asset Classification

  • Level adjustments (including upgrades and downgrades) must be decided by the Asset Owner according to organizational policy and risk assessment procedures and cannot be changed arbitrarily; downgrades must be documented in writing for audit purposes.
  • Personally Identifiable Information (ID card, credit card number) → Private; Trade secrets → Confidential.

Differences from Information Security Roles

Similar to the Owner/Custodian direction in asset management, but the legal framework is different and cannot be directly equated:

  • Data Controller vs Asset Owner: The legal responsibility of the controller is defined by GDPR / Personal Data Protection Act, and violations can be penalized by competent authorities; the responsibility of the Asset Owner comes from internal organizational policy.
  • Data Processor vs Asset Custodian: A written Data Processing Agreement (DPA) is required between the processor and the controller; the relationship between the custodian and the owner is an internal division of duties and does not require external contracts.

Regulations and Compliance

ISO/IEC 27001 and Basic ISMS Requirements

ISO/IEC 27000 Series Standards

| Standard | Topic | Certifiable? | Key Description |
| --- | --- | --- | --- |
| ISO/IEC 27000 | Overview of terms and definitions | ❌ Not certifiable | Defines terms and concepts used throughout the 27000 series; the basic dictionary for reading the other standards |
| ISO/IEC 27001 | Information Security Management System (ISMS) requirements | ✅ Can apply for 3rd party certification | Specifies the requirements an organization must (SHALL) meet to establish, implement, and maintain an ISMS; the core of the series' certification |
| ISO/IEC 27002 | Information security control guidelines | ❌ Not certifiable | Provides implementation suggestions (SHOULD) for the controls in Annex A of 27001; an operational manual, not a specification |
| ISO/IEC 27003 | ISMS implementation guidance | ❌ Not certifiable | Explains how to implement the 27001 clauses, providing implementation examples and recommended practices |
| ISO/IEC 27004 | Information security measurement and evaluation | ❌ Not certifiable | Provides design methods for measurement indicators, corresponding to 27001 Clause 9 (Performance Evaluation), to help organizations evaluate ISMS effectiveness |
| ISO/IEC 27005 | Information security risk management | ❌ Not certifiable | Provides guidance on the risk management process (identification → assessment → treatment → monitoring); the methodology behind 27001 risk assessment |
| ISO/IEC 27006 | Requirements for certification bodies | ✅ (applies to the certification body itself) | Specifies the conditions that third-party bodies performing ISMS audits and certifications must meet; not applicable to general organizations |
| ISO/IEC 27007 | ISMS audit guidance | ❌ Not certifiable | Provides methodology for ISMS internal audits and third-party audits, supplementing ISO 19011 for information security audit scenarios |
| ISO/IEC 27017 | Cloud service security controls | Depends on certification body | Additional control guidance for cloud providers and tenants, supplementing 27002 for cloud scenarios |
| ISO/IEC 27018 | Public cloud PII protection | Depends on certification body | Guidance for protecting personal data in cloud environments, in line with the spirit of GDPR |

ISO/IEC 27001 and SoA Key Points

  • Clauses 4–10 of 27001 are mandatory (SHALL); organizations must meet all of them to obtain certification.
  • Passing 27001 certification = ISMS management system meets the standard; 27002 is a reference manual for "how to do it" and is not certifiable itself.
  • The 93 control measures in Annex A do not all require implementation; organizations select applicable items based on risk assessment results and record the selection rationale or exclusion explanation in the Statement of Applicability (SoA).
  • 27002 provides implementation suggestions (SHOULD) for each control measure; practices can differ from 27002, but this must be explained in the SoA.
  • ISO only publishes standards and does not issue individual qualifications; Lead Auditor certificates are issued by personnel certification bodies (such as IRCA, PECB) in accordance with ISO/IEC 17024.

ISMS Core Elements

| Element | Description | Specific Requirements |
| --- | --- | --- |
| Context | Understand the organizational environment and stakeholder needs | Identify internal/external issues, regulatory requirements, stakeholder expectations |
| Leadership | Commitment and participation of senior management | Establish security policy, assign security responsibilities, provide resources |
| Planning | Risk assessment and goal setting | Perform risk assessment, formulate risk treatment plans, set measurable security goals |
| Support | Provide necessary resources and capabilities | Staffing, education and training, documented procedures, internal communication |
| Operation | Implement and run the ISMS | Execute risk treatment measures, operate controls, manage suppliers |
| Performance Evaluation | Monitoring and measurement | Internal audit, management review, performance indicator monitoring |
| Improvement | Continuous improvement mechanism | Non-conformity handling, corrective actions, preventive actions |

ISO 27001 Annex A Control Measure Categories

| Topic | Number of Controls | Coverage |
| --- | --- | --- |
| Organizational | 37 | Security policy, roles and responsibilities, asset management, supplier relationships, incident management |
| People | 8 | Pre-employment screening, security awareness training, disciplinary procedures, termination procedures |
| Physical | 14 | Secure areas, equipment protection, cabling security, media disposal |
| Technological | 34 | Access control, encryption, network security, secure development, vulnerability management |

  • Total of 93 control measures (the 2013 version had 114; after the 2022 revision, 11 were added and others were merged/streamlined).
  • New controls include: Threat Intelligence, Cloud Service Security, Data Masking, Monitoring Activities, etc.

ISMS Effectiveness Indicators

| Indicator Type | Example Indicator | Target Value Reference |
| --- | --- | --- |
| Preventive Effect | Security awareness training completion rate, vulnerability patching time | Training completion rate > 95%, high-risk vulnerability patching < 72 hours |
| Detection Capability | Mean Time to Detect (MTTD), false positive rate | MTTD < 24 hours, false positive rate < 5% |
| Response Efficiency | Mean Time to Respond (MTTR), incident resolution rate | MTTR < 4 hours, resolution rate > 98% |
| Compliance Level | Audit finding improvement rate, control measure effectiveness | Major findings 100% improved, control effectiveness > 90% |
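
MTTD and MTTR can be computed straightforwardly from incident timestamps. A sketch with fabricated incident records:

```python
from datetime import datetime

# Hypothetical incident records: occurrence, detection, and resolution times.
incidents = [
    {"occurred": datetime(2024, 5, 1, 9, 0), "detected": datetime(2024, 5, 1, 17, 0),
     "resolved": datetime(2024, 5, 1, 19, 0)},
    {"occurred": datetime(2024, 5, 3, 2, 0), "detected": datetime(2024, 5, 3, 6, 0),
     "resolved": datetime(2024, 5, 3, 8, 0)},
]

def mean_hours(deltas):
    """Average a list of timedeltas, expressed in hours."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 3600

mttd = mean_hours([i["detected"] - i["occurred"] for i in incidents])  # Mean Time to Detect
mttr = mean_hours([i["resolved"] - i["detected"] for i in incidents])  # Mean Time to Respond

print(f"MTTD = {mttd:.1f} h, MTTR = {mttr:.1f} h")  # MTTD = 6.0 h, MTTR = 2.0 h
```

In this toy data set, MTTD (6 h) already meets the "< 24 hours" reference target from the table, while MTTR (2 h) meets "< 4 hours"; in practice these timestamps come from SIEM and ticketing systems.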

ISMS PDCA Cycle

ISO/IEC 27001 adopts the PDCA cycle to ensure continuous improvement of the ISMS:

| Stage | Core Task |
| --- | --- |
| Plan | Establish ISMS policy, goals, and risk assessment processes; formulate risk treatment plans |
| Do | Implement and run ISMS policies, control measures, and procedures |
| Check | Evaluate ISMS performance, perform internal audits and management reviews, report results to management |
| Act | Take corrective and preventive actions based on audit results to drive continuous ISMS improvement |

Audit Types (Three Parties) Comparison Table

| Type | Description | Typical Example |
| --- | --- | --- |
| 1st Party Audit | Internal audit; the organization audits itself | Internal security audit performed by the company itself |
| 2nd Party Audit | Audit by an external interested party; e.g., a competent authority audits subordinate institutions | Financial Supervisory Commission audits banks under its jurisdiction; customers audit suppliers |
| 3rd Party Audit | Performed by an independent verification/certification body; can issue certification certificates | ISO 27001 certification audit |

Judging 2nd Party vs 3rd Party Audits

  • Only 3rd party audits can issue external certification certificates (e.g., passing ISO 27001 certification).
  • Competent authority audits (e.g., Financial Supervisory Commission checking banks) = 2nd party, not 3rd party.
  • Common misconception: It feels like the competent authority is an "external independent third party," but in ISO definitions, external parties with a vested interest (regulatory agencies, customers) are all considered 2nd party.

Third-Party Audit Certification Comparison Table

| Certification / Report | Nature | Audit Scope | Characteristics |
| --- | --- | --- | --- |
| SOC 2 Type 1 | 3rd party audit report | Whether system design at a specific point in time meets the Trust Services Criteria (TSC) | Point-in-time; only proves the design is reasonable, does not verify actual operational effectiveness |
| SOC 2 Type 2 | 3rd party audit report | Whether system operation over a period of time (usually 6–12 months) meets the TSC | Period-of-time; verifies controls are continuously and effectively operating; more persuasive than Type 1 |
| ISO 27001 Certificate | 3rd party certification | Whether the ISMS management system meets the ISO 27001 standard | Annual surveillance audits + recertification every three years; focuses on the "management system" rather than technical control operational details |
| PCI DSS AoC | Attestation of Compliance | Whether the Cardholder Data Environment (CDE) meets PCI DSS requirements | For organizations handling credit card transactions; requirements vary greatly by level (Level 1 requires an on-site QSA audit) |

SOC 2 Type 1 vs Type 2

  • Type 1 = Blueprint Review: Control measures are reasonably designed, but it has not been verified whether they are actually being executed.
  • Type 2 = Actual Acceptance: During an observation period, control measures are indeed operating effectively.
  • Cloud service providers (e.g., AWS, Azure) usually obtain SOC 2 Type 2 for enterprise customers to use in supplier risk assessments.

NIST CSF and NIST SP 800 Series

NIST CSF (Cybersecurity Framework)

A voluntary cybersecurity framework released by the US NIST (National Institute of Standards and Technology) and widely used across industries. CSF 2.0 (released in 2024) added the Govern function, expanding the framework to six core functions that emphasize governance and give organizations a structured way to assess and improve their security posture.

| Function | Description | Corresponding Activities |
| --- | --- | --- |
| Govern (GV) | Establish security governance structure and strategy to ensure security risk management aligns with organizational goals (added in CSF 2.0) | Security policy formulation, role and responsibility assignment, supply chain risk management |
| Identify (ID) | Inventory organizational assets, business environment, and risks | Asset management, risk assessment, supply chain identification |
| Protect (PR) | Implement control measures to protect critical assets | Access control, data security, security awareness training, platform security |
| Detect (DE) | Timely discovery of security incidents | Continuous monitoring, anomaly analysis, incident detection procedures |
| Respond (RS) | Take action on confirmed incidents | Incident management, analysis, notification, corrective actions |
| Recover (RC) | Restore affected services to normal | Recovery plan execution, improvement measures, external communication |

NIST CSF vs ISO 27001

  • NIST CSF: Voluntary framework, no certification system, focuses on "what to do" (function-oriented), suitable as a starting point for security maturity assessment.
  • ISO 27001: Can apply for 3rd party certification, focuses on "how to manage" (management system-oriented), suitable for organizations that need to prove compliance externally.
  • Complementary: Organizations can use NIST CSF to assess the gap between current status and goals, and then use ISO 27001 to establish a certifiable management system.
  • CSF 2.0 added the "Govern" function, emphasizing that security governance should be led by senior management, which is consistent with the spirit of ISO 27001 management review.

NIST SP 800 Series

A series of Special Publications released by NIST, covering technical guidance and control specifications for various security topics, primarily for US federal agencies to follow for FISMA compliance, but also widely referenced by non-federal organizations.

| Document | Topic | Key Description |
| --- | --- | --- |
| SP 800-37 | Risk Management Framework (RMF) | Defines a seven-step risk management process (Prepare, Categorize, Select, Implement, Assess, Authorize, Monitor); the main compliance basis for federal agencies |
| SP 800-53 | Security and Privacy Controls | Hundreds of control measures classified into 20 control families (e.g., AC Access Control, IR Incident Response); the implementation basis for SP 800-37 |
| SP 800-61 | Computer Security Incident Handling Guide | (Rev. 3, 2025) Incorporates incident response into the overall risk management context of CSF 2.0; IR activities span all six Functions (Govern, Identify, Protect, Detect, Respond, Recover) |
| SP 800-63 | Digital Identity Guidelines | Specifies Identity Assurance Levels (IAL), Authenticator Assurance Levels (AAL), and Federation Assurance Levels (FAL) |
| SP 800-88 | Guidelines for Media Sanitization | (Rev. 2, 2025) Divides media sanitization into three levels: Clear / Purge / Destroy, providing a decision framework for choosing sanitization methods based on data sensitivity and device type |
| SP 800-171 | Protecting CUI | Security requirements for non-federal systems processing Controlled Unclassified Information (CUI), commonly used for government supply chain compliance |
| SP 800-207 | Zero Trust Architecture | Defines seven Zero Trust Tenets and the PE/PA/PEP logical architecture components, providing a reference architecture for federal agencies migrating to Zero Trust |
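
The SP 800-88 "Clear" level can be approximated in software by overwriting a file's contents before deleting it. The sketch below is illustrative only: on SSDs and journaling/copy-on-write filesystems, overwriting does not reliably destroy the old blocks, which is precisely why the Purge (e.g., crypto-erase) and Destroy levels exist:

```python
import os, tempfile

def clear_file(path: str, passes: int = 1) -> None:
    """'Clear'-level sanitization sketch: overwrite the file in place with
    random bytes, flush to disk, then unlink. NOT sufficient for SSDs or
    copy-on-write filesystems (use Purge/Destroy for sensitive media)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

# Demo on a throwaway temp file.
fd, tmp = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"customer credit card numbers")
clear_file(tmp)
print(os.path.exists(tmp))  # False
```

The decision framework in SP 800-88 matters more than the code: the sanitization level is chosen from data sensitivity and media type, not from convenience.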

NIST CSF vs SP 800 Series

  • CSF: Defines "what results to achieve" (high-level function-oriented), suitable for assessing and communicating security posture.
  • SP 800 Series: Defines "what specific controls and processes to implement" (detailed technical-oriented), suitable for federal compliance and detailed implementation planning.
  • Complementary: CSF is the map, SP 800 series is the construction specification for each area.

COBIT Governance Framework

COBIT (Control Objectives for Information and Related Technologies)

An IT governance framework released by ISACA; the current version is COBIT 2019. The core question is "Is IT doing the right things, and is it doing them compliantly?", aimed at management and auditors.

COBIT divides IT activities into two levels:

| Level | Description |
| --- | --- |
| Governance | Set direction, evaluate options, monitor execution; the responsibility of the board or senior management |
| Management | Plan, build, run, and monitor specific activities; the responsibility of IT management |

Common use cases: IT audit, SOX compliance (Sarbanes-Oxley Act), and combined with ISO 27001 as a governance reference.

ITIL Service Management Framework

ITIL (Information Technology Infrastructure Library)

An IT service management framework released by PeopleCert (formerly AXELOS); the current version is ITIL 4 (2019). The core question is "Is IT delivering services well and operating stably?", aimed at IT operations and service management teams.

ITIL 4 centers on the Service Value System (SVS), containing 34 management practices, divided into three categories by function:

| Category | Description | Representative Practices |
| --- | --- | --- |
| General Management Practices | Management activities common across the organization | Risk management, information security management, knowledge management |
| Service Management Practices | Management activities specific to IT services | Incident management, problem management, change control, service desk |
| Technical Management Practices | Technology-oriented management activities | Infrastructure management, deployment management |

Intersection with security: Incident Management, Problem Management, and Change Control processes overlap highly with security incident handling and vulnerability patching processes.

COBIT vs ITIL

  • COBIT: IT governance framework, answers "Are we doing the right things?", aimed at management and auditors.
  • ITIL: IT service management framework, answers "Are we doing services well?", aimed at operations teams.
  • The two are parallel, with no hierarchical relationship. In practice, they can be used together: COBIT sets governance goals, ITIL implements service processes.

GDPR and Taiwan Personal Data Protection Act Comparison Table

| Aspect | GDPR (EU) | Taiwan Personal Data Protection Act |
| --- | --- | --- |
| Applicability | Organizations processing PII of EU residents (not limited by geography; Taiwan enterprises serving EU users are also covered) | Public and non-public agencies collecting, processing, or using PII within Taiwan |
| PII Definition | Any information that can directly or indirectly identify a natural person | Name, date of birth, ID number, etc., that can directly or indirectly identify an individual |
| Core Principles | Lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity and confidentiality | Legitimate purpose, necessity principle, data subject consent (written consent as a principle) |
| Data Subject Rights | Access, rectification, erasure (right to be forgotten), portability, object to processing, restriction of processing | Inquiry, review, copy, supplement, rectification, stop collection/processing/use, erasure |
| Data Breach Notification | Notify competent authority (DPA) within 72 hours | No unified statutory time limit; notify as soon as possible according to competent authority requirements |
| Maximum Penalty | €20,000,000 or 4% of global annual turnover (whichever is higher) | Non-public agencies: fine of up to NT$15 million |
| Data Protection Officer (DPO) | Mandatory in specific circumstances | No mandatory requirement |
| Cross-border Transfer | Must ensure the receiving country has an adequate level of protection (e.g., SCCs standard contractual clauses, adequacy decision) | Must be based on a specific purpose and protected by the laws of the receiving country, or obtain data subject consent |

TIP

  • Extraterritorial effect of GDPR: The trigger condition is not "where the organization is," but "whether it actively provides services to EU residents or monitors their behavior" (GDPR Art. 3(2)). Practical judgment criteria include: whether a local EU language is provided, whether Euro payments are accepted, whether the EU market is explicitly mentioned in marketing, etc. If the service has no geographic identification mechanism and treats all regions equally, it depends on whether there is an "intent to actively reach EU users," not automatically applicable just because there are EU users.
  • The "right to be forgotten" and "data portability" are the most obvious differences between GDPR and Taiwan's Personal Data Protection Act.
  • GDPR penalties are far higher than Taiwan's Personal Data Protection Act, so for enterprises with large business scales in the EU, GDPR compliance priority is usually higher.

Taiwan Personal Data Protection Act: Notification Obligation (Articles 8, 9)

The timing of notification when collecting PII depends on the collection method:

| Collection Method | Article | Timing of Notification |
| --- | --- | --- |
| Direct collection (obtained from the data subject, e.g., filling out a form) | Article 8 | At the time of collection |
| Indirect collection (obtained from a third party, not the data subject) | Article 9 | Upon first use of the PII |

The notification content is the same for both cases: name of the collecting agency, purpose of collection, PII category, period/region/object/method of use, and rights the data subject can exercise.

Advanced Privacy Concepts

Data Sovereignty: Data is subject to the laws of the country where it is stored. When an organization uses cloud services, if data is stored on servers in other countries, it may be subject to the laws of multiple countries simultaneously (e.g., the US CLOUD Act allows the US government to cross-border access data held by US companies).

DPIA (Data Protection Impact Assessment): Article 35 of the GDPR requires prior impact assessment for high-risk PII processing activities to identify privacy risks and formulate mitigation measures. Although Taiwan's Personal Data Protection Act has no explicit DPIA article, competent authorities encourage organizations to conduct them voluntarily.

Data Minimization and Purpose Limitation

These are the two GDPR principles most directly related to "collecting PII":

| Principle | Definition | Common Violation Case |
| --- | --- | --- |
| Data Minimization | Collect only the PII necessary to achieve a specific purpose; do not collect more than needed | Requiring ID number, occupation, annual income, etc., unrelated to the service when applying for membership |
| Purpose Limitation | PII can only be used for the purpose stated at the time of original collection; it cannot be repurposed | Phone numbers collected for "customer service needs" are used for marketing SMS |

  • Complementary: Purpose limitation determines "where it can be used," data minimization determines "how much can be collected."
  • Violating purpose limitation is usually more serious than violating data minimization, because if already collected data is misused, it is difficult for the data subject to remedy.

Data Processing Roles

| Role | Typical Holder | Responsibility |
| --- | --- | --- |
| Data Controller | Enterprise or agency collecting PII | Determines the purpose and method of PII processing; bears primary legal responsibility externally |
| Data Processor | Entrusted third-party vendor | Processes PII according to controller instructions; must sign a Data Processing Agreement (DPA) |

HIPAA Health Data Protection Regulations

HIPAA (Health Insurance Portability and Accountability Act)

A 1996 US federal regulation that mandates the privacy and security of Protected Health Information (PHI) in the healthcare industry. Applicable objects are divided into two categories:

  • Covered Entities: Healthcare providers, health insurance companies, health information exchange organizations.
  • Business Associates: Third-party vendors entrusted by covered entities to process PHI (e.g., cloud storage, billing system operators).

| Rule | Core Requirement |
| --- | --- |
| Privacy Rule | Limits the scope of PHI use and disclosure, requires patient authorization; grants patients the right to access and correct their own data |
| Security Rule | Mandates administrative (policy procedures), physical (equipment access control), and technical (encryption, access control) protection measures for electronic PHI (ePHI) |
| Breach Notification Rule | ePHI breaches must be notified to affected individuals and HHS within 60 days; single incidents involving 500+ people must also be notified to local media |

💡 Terminology Quick Check

  • PHI: Protected Health Information
  • ePHI: Electronic Protected Health Information
  • HHS: Department of Health and Human Services
  • HIPAA vs GDPR: HIPAA only applies to the healthcare industry and is a US regulation; GDPR applies across industries and covers all PII processing within the EU.

Major Frameworks and Regulations Quick Check

| Framework/Regulation | Nature | Focus | Applicable Objects | Mandatory? |
| --- | --- | --- | --- | --- |
| ISO 27001 | Management system standard | Information security management (ISMS) | Universal (all industries) | Voluntary (can apply for certification) |
| NIST CSF | Security framework | Security risk management (6 core functions) | Universal (US government priority) | Voluntary |
| NIST SP 800 series | Technical guidance & control specs | Specific implementation details for security topics (including 800-53 controls) | US federal agencies (non-federal can reference) | Mandatory for federal agencies |
| COBIT | Governance framework | IT governance and control objectives | IT governance layer, auditors | Voluntary (often used as an audit benchmark) |
| ITIL | Service management framework | IT service delivery and operations | IT operations/service management teams | Voluntary |
| GDPR | Regulation | Personal data protection | Organizations processing PII of EU residents | Mandatory |
| Taiwan Personal Data Protection Act | Regulation | Personal data protection | Agencies and enterprises collecting/processing PII within Taiwan | Mandatory |
| HIPAA | Regulation | Healthcare information protection | US healthcare industry | Mandatory |
| PCI DSS | Industry standard | Credit card transaction data security | Organizations handling credit card transactions | Mandatory (card organization requirement) |

  • ISO 27001 vs NIST CSF: 27001 has a 3rd party certification system; CSF has no certification, focuses on maturity assessment.

Information Security Responsibility Level Operation Requirements Table (Levels A to E)

Responsibility levels decrease from A to E, with Level A being the strictest and Level E the most lenient.

| Item | Level A | Level B | Level C | Level D | Level E |
| --- | --- | --- | --- | --- | --- |
| Full-time Security Personnel | 4+ | 2+ | 1+ | 0 (concurrently held by IT staff) | 0 (concurrently held by general staff) |
| ISMS Verification | Entire agency: mandatory 3rd party verification | Core systems must pass 3rd party verification | Core systems must pass 3rd party verification | Self-conducted per competent authority regulations | Self-conducted per competent authority regulations |
| SOC (Security Operations Center) Monitoring | Must be built; 24/7 monitoring for the entire agency | Must be built for core systems | May be built based on agency size and risk needs | None | None |
| Vulnerability Scanning | Once a year (entire agency) | Once a year (core systems) | Once a year (core systems) | Once every 2 years | None |
| Penetration Testing | Once every 2 years | Once every 2 years | Once every 2 years | None | None |
| Security Health Check | Once a year | Once a year | Once every 2 years | None | None |
| Security Education & Training (hours/year) | Full-time personnel: 12 | Full-time personnel: 12 | Full-time personnel: 12 | Concurrently held IT staff: 3 | General concurrently held staff: 3 |

TIP

  • A vs B: The difference in ISMS verification and SOC monitoring is the scope; Level A covers the entire agency, Level B only targets core systems.
  • Level C is the watershed: The minimum threshold for penetration testing and security health checks; neither exists from Level D onwards.
  • Level D onwards switches to concurrent system: No full-time security personnel, concurrently held by IT staff, vulnerability scanning reduced to once every 2 years.
  • E vs D: Vulnerability scanning is still once every 2 years at Level D, and only completely cancelled at Level E; concurrently held personnel also drop from IT staff to general staff.

Risk Management

Differences between Vulnerability, Threat, and Risk

TermEnglishDescriptionAnalogy
VulnerabilityVulnerabilityA flaw in a system or process that can be exploitedThe door lock is broken
ThreatThreatAn event or actor that could exploit a vulnerability to cause damageA thief is active in the area
RiskRiskThe likelihood and impact of a threat exploiting a vulnerabilityThe probability and loss of being breached
  • Risk = Threat × Vulnerability × Asset Value. Reducing any of the three factors reduces the risk; if any factor is absent entirely, there is effectively no risk.

Detailed Risk Assessment and Treatment Process

StageInputActivityOutput
Asset IdentificationOrganizational structure, business processesInventory and classify information assetsAsset inventory, asset value
Threat/Vulnerability IdentificationAsset inventory, threat intelligenceIdentify applicable threats and existing vulnerabilitiesThreat list, vulnerability list
Risk AnalysisThreats, vulnerabilities, existing controlsAssess risk likelihood and impactInherent risk, residual risk
Risk AssessmentRisk value, risk appetiteDetermine risk acceptabilityRisk level, treatment priority
Risk TreatmentUnacceptable riskSelect and implement treatment measuresRisk treatment plan, control measures

Comparison of Risk Analysis Methods

AspectQualitative AnalysisSemi-Quantitative AnalysisQuantitative Analysis
Output FormatLevel description (e.g., High/Medium/Low, Risk Matrix)Relative score (Likelihood score × Impact score)Financial value (e.g., ALE = 120k/year)
Data RequirementExpert judgment, questionnaires, interviewsExpert judgment + rating scaleHistorical event data, statistical models
Analysis DifficultyLowerMediumHigher, requires statistical and financial analysis skills
SubjectivityHigh (depends on assessor experience)Medium (level mapping still has subjectivity)Low (calculated based on data)
Typical MethodRisk Matrix (Likelihood × Impact), Delphi MethodScore Matrix (High=5, Medium=3, Low=1 multiplied and sorted)ALE formula, Monte Carlo Simulation, FAIR model
Applicable ScenarioPreliminary screening, quick classification when resources are limitedLacks historical data but needs cross-project sorting and comparisonWhen financial decision basis is needed (e.g., ROSI Analysis)

Combining Qualitative and Quantitative Analysis

In practice, qualitative analysis is often used first to quickly screen high-risk items, and then quantitative analysis is performed on these items to produce financial data. Pure quantitative analysis is uncommon because it is difficult to obtain reliable historical occurrence rate data for many risks.

Risk Matrix

Presents Likelihood and Impact in a two-dimensional matrix; the intersection determines the risk level, assisting in prioritizing treatment. It is a qualitative tool; the output is a level label and does not involve financial values.

Likelihood ↓ / Impact →LowMediumHigh
HighMediumHighHigh
MediumLowMediumHigh
LowLowLowMedium
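As a sketch, the lookup in this matrix can be expressed as a small Python dictionary; the level labels below mirror the table above:

```python
# Qualitative risk matrix lookup; (likelihood, impact) -> risk level,
# following the 3x3 matrix in the table above.
RISK_MATRIX = {
    ("High", "Low"): "Medium", ("High", "Medium"): "High", ("High", "High"): "High",
    ("Medium", "Low"): "Low", ("Medium", "Medium"): "Medium", ("Medium", "High"): "High",
    ("Low", "Low"): "Low", ("Low", "Medium"): "Low", ("Low", "High"): "Medium",
}

def risk_level(likelihood: str, impact: str) -> str:
    """Return the qualitative risk level for a likelihood/impact pair."""
    return RISK_MATRIX[(likelihood, impact)]

print(risk_level("Medium", "High"))  # High
```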

Delphi Expert Assessment Method

A structured expert consensus method used for risk assessment scenarios where historical data is lacking and one must rely on expert judgment.

  1. Send anonymous questionnaires to experts to collect individual opinions.
  2. Summarize and provide feedback to all experts.
  3. Experts revise their own judgments based on group feedback.
  4. Repeat iterations until opinions converge to a consensus.

The anonymous design aims to eliminate bandwagon effects and authority influence, ensuring each expert's judgment is independent.

Score Matrix Method

A semi-quantitative analysis tool that maps qualitative levels to organization-defined values and then multiplies them to produce a risk priority score. A common example is High=5, Medium=3, Low=1, but both the values and the number of levels can be adjusted according to needs; the scale must remain consistent within the same assessment.

Risk Priority = Likelihood Score × Impact Score; the result is only for relative sorting and does not represent the actual loss amount.
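A minimal sketch of the score matrix in Python, using the example scale from the text (High=5, Medium=3, Low=1); the risk names are hypothetical:

```python
# Semi-quantitative score matrix: map qualitative levels to scores,
# multiply, and sort for relative priority (scores are not loss amounts).
SCORE = {"High": 5, "Medium": 3, "Low": 1}

risks = [
    ("Ransomware on file server", "Medium", "High"),     # likelihood, impact
    ("Phishing against staff", "High", "Medium"),
    ("Office Wi-Fi misconfiguration", "Low", "Medium"),
]

prioritized = sorted(
    ((name, SCORE[lik] * SCORE[imp]) for name, lik, imp in risks),
    key=lambda r: r[1], reverse=True,
)
for name, score in prioritized:
    print(f"{score:2d}  {name}")
```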

Risk Matrix vs Score Matrix

Both structures are the same (Likelihood × Impact), the difference is in the output format:

  • Risk Matrix (Qualitative): Outputs level labels (High/Medium/Low), used for quick classification
  • Score Matrix (Semi-quantitative): Outputs numerical scores, used for sorting priorities across projects

Risk Quantification Formulas

TermDescription
ALE (Annualized Loss Expectancy)Average expected loss amount from a specific threat within one year; ALE = ARO × SLE
ARO (Annualized Rate of Occurrence)Expected number of times a threat event occurs within one year (e.g., 0.1 = once every 10 years)
SLE (Single Loss Expectancy)Expected loss amount each time a threat event occurs; SLE = AV × EF
AV (Asset Value)Monetary value of the asset
EF (Exposure Factor)Percentage of asset loss when a threat occurs (0–100%)

Formula Connection

ALE=ARO×SLE

SLE=AV×EF

ALE=ARO×AV×EF

Example: Server value 1 million (AV), estimated loss of 60% after being encrypted by ransomware (EF = 0.6), estimated occurrence once every 5 years (ARO = 0.2). → SLE = 1 million × 0.6 = 600,000 → ALE = 0.2 × 600,000 = 120,000/year

If the annual cost of protective measures is less than 120,000, it is cost-effective.
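The formulas and the worked example above can be checked with a few lines of Python:

```python
def sle(asset_value: float, exposure_factor: float) -> float:
    """Single Loss Expectancy: SLE = AV x EF."""
    return asset_value * exposure_factor

def ale(aro: float, single_loss: float) -> float:
    """Annualized Loss Expectancy: ALE = ARO x SLE."""
    return aro * single_loss

# Worked example from the text: AV = 1,000,000, EF = 0.6, ARO = 0.2
loss_per_event = sle(1_000_000, 0.6)    # 600,000
annual_loss = ale(0.2, loss_per_event)  # 120,000
print(loss_per_event, annual_loss)
```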

Monte Carlo Simulation and ALE

The traditional ALE formula assumes ARO and EF are fixed values, but in reality these parameters have uncertainty ranges (e.g., "occurs every 3–7 years"). Monte Carlo simulation instead samples a possible value for each variable at random, repeats the calculation tens of thousands of times, and produces a probability distribution of ALE rather than a single number. It is the standard method for high-level Quantitative Risk Analysis (QRA).

For example, outputting: "There is a 90% probability that the annualized loss will not exceed 500,000," which is more valuable for decision-making than "expected loss of 200,000."
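A minimal Monte Carlo sketch in Python; the uniform ranges for EF and ARO below are illustrative assumptions, not values from any standard:

```python
import random

def monte_carlo_ale(av, ef_range, aro_range, trials=100_000, seed=42):
    """Sample EF and ARO from uniform ranges; return simulated ALE values."""
    rng = random.Random(seed)
    return [av * rng.uniform(*ef_range) * rng.uniform(*aro_range)
            for _ in range(trials)]

# Assumed ranges: EF between 40% and 80%; one event every 3-7 years
# (ARO between 1/7 and 1/3). AV = 1,000,000 as in the earlier example.
samples = sorted(monte_carlo_ale(1_000_000, (0.4, 0.8), (1 / 7, 1 / 3)))
p90 = samples[int(0.9 * len(samples))]
print(f"90% of simulated annual losses fall below {p90:,.0f}")
```

The output is a distribution, so instead of a single ALE you can read off percentiles, e.g. "90% of simulated annual losses fall below X."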

ROSI (Return on Security Investment)

ROSI measures the financial rationality of security control measures: How much loss can the investment save?

ROSI = ((ALE_before − ALE_after) − Annual Cost of Control Measure) ÷ Annual Cost of Control Measure × 100%
TermDescription
ALE_beforeAnnualized Loss Expectancy before implementing control measures
ALE_afterAnnualized Loss Expectancy after implementing control measures (after risk reduction)
Annual Cost of Control MeasureTotal cost of ownership of the security measure per year (license fees + labor + maintenance)

Example: Antivirus software annual fee 20,000, expected to reduce ALE from 150,000 to 30,000.

  • Loss saved: 150,000 − 30,000 = 120,000
  • Net benefit: 120,000 − 20,000 = 100,000
  • ROSI = 100,000 / 20,000 × 100% = 500% (every 1 unit invested saves 5 units of loss)
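The ROSI example can be reproduced directly:

```python
def rosi(ale_before: float, ale_after: float, annual_cost: float) -> float:
    """ROSI = ((ALE_before - ALE_after) - cost) / cost x 100%."""
    return ((ale_before - ale_after) - annual_cost) / annual_cost * 100

# Worked example from the text: ALE drops from 150,000 to 30,000 at a cost of 20,000
print(rosi(150_000, 30_000, 20_000))  # 500.0
```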

FAIR Model (Factor Analysis of Information Risk)

FAIR is an industry-mainstream quantitative risk analysis framework, an open standard maintained by The Open Group, which decomposes risk into a quantifiable factor tree structure, ultimately producing a probability distribution of loss.

TermDescription
Loss Event Frequency (LEF)Expected number of times a threat successfully causes loss within a certain time
Threat Event Frequency (TEF)Frequency of threat attempts (regardless of success or failure)
VulnerabilityProbability that a threat event converts into a loss event (the stronger the control measure, the lower this value)
Loss MagnitudeImpact scale of a single loss event, decomposed into Primary Loss and Secondary Loss (e.g., reputation damage, legal litigation)

FAIR vs Traditional ALE

  • The ALE formula is a "point estimate"; FAIR produces a conclusion of "there is an X% probability that the loss will not exceed Y amount" through factor decomposition and probability distribution, resulting in higher decision quality.
  • FAIR and ISO 27005/NIST CSF can be used complementarily: ISO 27005 defines the risk management process, FAIR provides quantitative analysis methods.
  • Adoption threshold: Requires collecting fine-grained threat and control measure data, suitable for organizations with higher security maturity.

Risk Treatment Strategy Comparison Table

Strategy (ISO 27005)EnglishDefinitionExampleApplicable Scenario
Risk AvoidanceRisk AvoidanceAbandon activities or assets that may trigger riskStop using insecure legacy protocols, abandon high-risk marketsRisk is too high and cannot be effectively reduced
Risk Modification (Reduction)Risk ModificationImplement control measures to reduce risk likelihood or impactDeploy firewall, implement MFA, encrypt transmissionRisk is within acceptable range, and control measure cost is reasonable
Risk SharingRisk SharingShare part of the consequences of risk with a third party, usually part of financial impact or contractual liabilityPurchase cyber insurance, outsource hosting (MSP/MSSP), SLA agreed compensation mechanismRisk impact is large, but impact can be dispersed through contracts or insurance
Risk Retention (Acceptance)Risk RetentionAcknowledge risk exists, take no additional measuresResidual risk is below risk appetite, patching cost far exceeds potential lossRisk is within tolerable range, or control cost is not cost-effective

ISO 27005 Risk Treatment Process

  1. Risk Assessment: Identification → Analysis → Evaluation.
  2. Based on assessment results, select a treatment strategy (Avoidance/Modification/Sharing/Retention) for each risk.
  3. Formulate a Risk Treatment Plan, recording the selected strategy and implementation schedule.
  4. Residual Risk must be formally approved for acceptance by management.

Risk Terminology Supplementary Comparison Table

TermEnglishDefinition
Inherent RiskInherent RiskThe original risk level before implementing any control measures, reflecting the natural exposure of assets to threats
Residual RiskResidual RiskThe remaining risk after implementing risk treatment measures, must be formally accepted by management
Secondary RiskSecondary RiskNew risks triggered by the risk response measures themselves (e.g., introducing monitoring systems to reduce security incident risk, but simultaneously creating employee privacy concerns)
Risk TransferRisk TransferTransferring specific parts of risk consequences to third parties through insurance, contracts, or compensation clauses
Risk AcceptanceRisk AcceptanceManagement is aware of and formally accepts the risk, takes no further treatment measures, applicable when residual risk is within risk appetite
Risk CapacityRisk CapacityThe maximum loss limit an organization can bear without jeopardizing survival, an objective financial/operational boundary (different from risk appetite: the latter is subjective willingness, the former is objective capability)
Risk AppetiteRisk AppetiteThe upper limit of risk an organization strategically chooses to actively bear, a subjective willingness decision, must be less than or equal to risk capacity
Risk ThresholdRisk ThresholdThe trigger level for a specific risk, requiring immediate response measures when exceeded, usually lower than risk appetite

Risk Sharing vs Risk Transfer

ComparisonRisk SharingRisk Transfer
ISO 27005 Term✅ Official termCommon colloquialism, not an official ISO 27005 term
DefinitionShare risk consequences with a third party, usually part of financial loss, service level breach liability, or operational impactTransfer specific risk consequences to a third party through insurance or contracts, usually emphasizing the transfer of "financial burden"
Substantive DifferenceOriginal risk and governance responsibility remain with the organization, only shared by a third partySemantically stronger, but in practice usually only part of the financial consequences is transferred; legal liability does not disappear
ExampleOutsourcing: Service interruption loss shared by both parties according to SLA or compensation capPurchase cyber insurance: Insurance company absorbs part of the claim amount

The two are often used interchangeably in practice; ISO/IEC 27005 officially adopts Risk Sharing, Risk Transfer is a more common colloquialism. Regardless of the term used, insurance or outsourcing contracts can only transfer part of the financial consequences; compliance obligations and legal liabilities under GDPR, Personal Data Protection Act, etc., do not disappear as a result.

Risk Retention vs Risk Acceptance

ComparisonRisk RetentionRisk Acceptance
NatureRisk treatment strategy (one of four)Formal approval action by management
SourceISO 27005 treatment option list; ISO 27001 body does not list strategy namesISO 27001 Clause 6.1.3e directly requires; ISO 27005 explains process details
TimingAfter assessment, choose not to take additional control measuresAfter any treatment strategy is executed, formal sign-off on residual risk
RelationshipCan stand aloneMust exist after every treatment strategy is executed

The two can exist simultaneously: Choose "Risk Retention" (no treatment) → Residual risk equals inherent risk → Management executes "Risk Acceptance" for formal approval.

Hierarchical Relationship of Risk Appetite, Risk Capacity, and Risk Threshold

  • Risk Capacity: Objective upper limit, exceeding this line will jeopardize organizational survival (e.g., total loss of net assets).
  • Risk Appetite: Subjective willingness, how much risk the organization strategically chooses to bear, must be ≤ Risk Capacity.
  • Risk Threshold: Trigger level for specific risks, requiring immediate response when exceeded, usually lower than Risk Appetite.

Hierarchy from high to low: Risk Capacity ≥ Risk Appetite ≥ Risk Threshold.

Relationship between Inherent Risk and Residual Risk

  • No control measure can reduce risk to zero; residual risk must be formally accepted by management.
  • Secondary risk is a link often overlooked in risk management plans: when evaluating response measures, it is necessary to identify whether new risks are introduced simultaneously.

Shadow IT

Definition: Software, cloud services, or hardware devices adopted by employees or teams without approval from the IT department. With the popularity of SaaS tools and the rise of AI services, the scope of Shadow IT has expanded significantly.

Common Types:

TypeDescriptionTypical Example
Shadow SaaSEmployees privately use unapproved SaaS toolsPersonal Google Drive, Dropbox, Notion
Shadow CloudEngineers set up cloud resources on their own using personal or credit card accounts, bypassing IT procurement processesSelf-built AWS account, Azure subscription
Shadow AIEmployees input company data into unapproved AI toolsPasting customer data or source code into ChatGPT, using unvetted AI Coding Assistants
Shadow DataData is copied to unmanaged storage locations, forming untracked data copiesExporting databases to personal NAS, forwarding email attachments to personal mailboxes
Shadow HardwarePhysical devices not registered by IT connected to the enterprise networkPersonal NAS, Raspberry Pi, undeclared routers

Common Risks:

RiskDescription
Data BreachSensitive data enters unmanaged services; supplier terms may allow using data to train models (Shadow AI is particularly evident)
Compliance ViolationData storage locations may violate data sovereignty or GDPR regulations
Lack of Patch ManagementSoftware not included in enterprise patching processes may have known vulnerabilities
Audit Blind SpotUnable to track data flow and usage records, making it difficult to trace sources after an incident
  • Countermeasures: CASB to detect unauthorized cloud services, SaaS management platforms, regular cloud usage inventory; provide approved alternatives to reduce employee motivation to bypass (employees often use Shadow IT because official tools are inefficient).
  • Intersection of Shadow IT and BYOD: Employees install unapproved work applications on personal devices, touching on both issues simultaneously.

Incident Management

Information Security Incident Classification

ClassificationEnglishDescriptionExampleHandling Priority
Security EventSecurity EventState changes identified in systems or networksFirewall logs denied connectionLow (log only)
Security AlertSecurity AlertNotifications that may indicate a security eventIDS detects abnormal trafficMedium (needs analysis)
Security IncidentSecurity IncidentEvents confirmed to violate security policies or threatsMalware infection, data breachHigh (immediate response)
Major Security IncidentMajor IncidentIncidents causing significant impact on business operationsRansomware locking core systemsHighest (crisis management)

Containment Relationship of the Four

All incidents are events, but not all events are incidents. The four have a nested containment relationship, not a parallel classification:

Event ⊃ Alert ⊃ Incident ⊃ Major Incident

  • Event: Broadest scope; any observable system state change is an event, including massive amounts of normal logs.
  • Alert: Notifications filtered from events that require attention; still may be a False Positive.
  • Incident: Confirmed violation of security policy after analysis, upgraded to an incident, requiring formal response.
  • Major Incident: A subset of incidents that impact business operations, requiring the activation of crisis management processes.

Information Security Incident Severity Classification Table (Levels 1 to 4)

The higher the number, the more severe the security incident.

Item / Severity LevelLevel 1Level 2Level 3Level 4
Judgment Criteria (Core Logic)Non-core system interruption, no PII or confidential data leakageNon-core system paralysis, or involves general PII leakageCore system paralysis, data tampered with, or sensitive PII leakageNational security threatened, critical infrastructure large-scale shutdown
Notification Time LimitWithin 1 hourWithin 1 hourWithin 1 hourWithin 1 hour
Damage Control or Recovery OperationsWithin 72 hoursWithin 72 hoursWithin 36 hoursWithin 36 hours
Deadline for Submission of Investigation, Handling, and Improvement ReportWithin 1 monthWithin 1 monthWithin 1 monthWithin 1 month
Typical ExampleOfficial website defacement, general office computer virus infectionCore business briefly interrupted, general PII leakageMedical record leakage, official document leakage, core system collapsePower grid/water conservancy paralysis, military or diplomatic secret leakage

Taiwan Regulatory Time Limit Supplement

According to the current "Regulations on Security Incident Notification, Response, and Drills", all levels of incidents should continue to be investigated and handled after completing damage control or recovery, and an investigation, handling, and improvement report must be submitted within 1 month.

Security Incident Response (NIST SP 800-61)

NIST SP 800-61 Rev. 3 (April 2025) positions incident response as a component of CSF 2.0 risk management, no longer presented as a standalone "incident handling manual." IR activities span the six Functions of CSF 2.0: Govern and Identify form the governance foundation, Protect handles preparation and defense, and Detect, Respond, and Recover handle the actual incident. Continuous improvement spans the entire IR lifecycle rather than occurring only after the fact.

CSF 2.0 FunctionCore Task
GovernEstablish IR policy, role division, authorization, and resource allocation
IdentifyIdentify critical assets, assess threat scenarios, establish IR trigger conditions
ProtectDeploy detection tools, establish CSIRT, formulate response plans and drills
DetectIdentify incidents through SIEM, IDS, and log monitoring; determine severity and impact scope
RespondContain affected systems, remove malicious programs and vulnerabilities, execute communication and coordination
RecoverRestore systems and verify normal operation, write incident reports, update response plans

Rev. 2 (2012) Four-Stage Lifecycle

Rev. 2 (withdrawn in April 2025) defined the incident handling lifecycle in four stages:

StageNameCore Task
1PreparationEstablish CSIRT, formulate response plans, deploy detection tools, conduct education, training, and drills
2Detection & AnalysisIdentify incidents through SIEM, IDS, and log monitoring; determine incident severity and impact scope
3Containment, Eradication & RecoveryIsolate affected systems (containment) → remove malicious programs and vulnerabilities (eradication) → restore systems and verify normal operation (recovery)
4Post-Incident ActivityWrite incident reports, review improvement measures, update response plans, preserve evidence (for legal or audit purposes)

💡 Terminology Quick Check

  • CSIRT: Computer Security Incident Response Team
  • SIEM: Security Information and Event Management
  • IDS: Intrusion Detection System

Incident Handling Priority

  • Containment is the first priority: Isolate infected systems first to prevent disaster expansion, then proceed with eradication and recovery.
  • "Lessons Learned" in post-incident activities is the key to continuous improvement; every incident should produce improvement suggestions fed back to the preparation stage.

Differences between CSIRT, CERT, and SOC

OrganizationFull NamePositioningScope of Responsibility
CSIRTComputer Security Incident Response TeamIncident response teamInternal security incident detection, analysis, containment, and recovery for the organization; can be permanent or ad-hoc
CERTComputer Emergency Response TeamEmergency response centerNational or regional level, provides cross-organizational security incident coordination and early warning (e.g., TWCERT/CC)
SOCSecurity Operations CenterSecurity Operations Center24/7 continuous monitoring, handles daily alerts and incident classification; CSIRT handles incidents escalated by SOC
  • CERT often acts as a national-level coordination center (e.g., Taiwan's TWCERT/CC, US-CERT), while CSIRT tends to be an internal team for an organization.
  • SOC focuses on daily monitoring and initial alert screening, while CSIRT focuses on in-depth incident investigation and response. The two often work together: SOC detection → escalation to CSIRT for handling.

MTTD / MTTA / MTTR Incident Response Indicators

IndicatorFull NameDefinitionCalculation Method
MTTDMean Time To DetectAverage time from when a threat actually occurs to when the organization detects the incidentDetection Time − Incident Occurrence Time
MTTAMean Time To AcknowledgeAverage time from when an alert is triggered to when an analyst confirms taking overTakeover Time − Alert Trigger Time
MTTRMean Time To RespondAverage time from when an incident is detected to when the response (containment/eradication/recovery) is completedResponse Completion Time − Detection Time

Timeline order: Incident Occurrence → MTTD → Detection Confirmation/Alert Trigger → MTTA → Analyst Takeover → MTTR → Resolution

  • Shortening MTTD is more critical than shortening MTTR: The longer an attacker lurks inside the organization (Dwell Time), the greater the scope of lateral movement and data theft.
  • Industry reference: According to reports, the global average Dwell Time has shortened from hundreds of days to about 10 days, but there is still room for improvement.
  • Practices to improve MTTD: SIEM correlation analysis, EDR (Endpoint Detection and Response), Threat Hunting.
  • Definition in these notes: MTTR start point is detection time, so MTTR covers MTTA (MTTA is a sub-interval of MTTR).
  • Another common definition: Some organizations define the MTTR start point as the analyst takeover time, in which case MTTA and MTTR are non-overlapping continuous intervals, Total Response Time = MTTA + MTTR.
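A sketch of computing these indicators from incident timestamps; the timestamps below are hypothetical, and MTTR is measured from detection time, per the definition used in these notes:

```python
from datetime import datetime
from statistics import mean

# Hypothetical timeline per incident: occurred, detected, acknowledged, resolved
incidents = [
    ("2025-01-03 02:10", "2025-01-03 09:40", "2025-01-03 10:05", "2025-01-03 14:30"),
    ("2025-02-11 22:00", "2025-02-12 01:30", "2025-02-12 01:45", "2025-02-12 05:15"),
]

def hours(start: str, end: str) -> float:
    """Elapsed hours between two 'YYYY-MM-DD HH:MM' timestamps."""
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

mttd = mean(hours(occ, det) for occ, det, _, _ in incidents)   # occurrence -> detection
mtta = mean(hours(det, ack) for _, det, ack, _ in incidents)   # detection -> takeover
mttr = mean(hours(det, res) for _, det, _, res in incidents)   # detection -> resolution
print(f"MTTD={mttd:.2f}h  MTTA={mtta:.2f}h  MTTR={mttr:.2f}h")
```

Under the alternative definition mentioned above, the MTTR line would measure from the acknowledgement timestamp instead.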

Post-Incident Report

ElementDescription
Incident TimelineComplete record of every time node from detection, notification, containment to recovery
Root Cause AnalysisTrace the root cause of the incident, rather than just describing surface symptoms
Impact ScopeAffected systems, number of data records, business interruption time
Corrective ActionsPatching actions taken (e.g., patching vulnerabilities, blocking accounts, updating rules)
Prevention RecommendationsLong-term improvement measures to prevent recurrence of similar incidents

Cross-Platform Log Management and Forensic Path

AspectWindowsLinux
System LogsEvent Viewer: Application, Security, System channels/var/log/syslog (Debian family) or /var/log/messages (RHEL family); systemd uses journalctl
Security/Auth LogsSecurity event records (Login Success 4624, Login Failure 4625, Account Creation 4720)/var/log/auth.log (Debian) or /var/log/secure (RHEL)
Web Server LogsIIS: C:\inetpub\logs\LogFiles\Apache: /var/log/apache2/; Nginx: /var/log/nginx/
Firewall LogsWindows Firewall: Event Viewer Windows Firewall With Advanced Securityiptables: /var/log/kern.log; nftables: journalctl -k
Centralized ManagementWindows Event Forwarding (WEF)rsyslog / syslog-ng remote forwarding to SIEM
Log RetentionGPO sets Event Log size and overwrite policylogrotate sets rotation and retention days
Key EventsLogin Failure (4625), Permission Change (4672), Object Access (4663)/var/log/auth.log, auditd rules
Anti-TamperingForward to SIEM and set to read-onlyRemote Syslog + chattr +a (append-only)
Windows / Linux Log Query Command Examples
powershell
# Windows: Query login failure events (4625) from the last day
Get-WinEvent -FilterHashtable @{
  LogName = 'Security'
  Id = 4625
  StartTime = (Get-Date).AddDays(-1)
} -MaxEvents 20

# Windows: Query Security log directly using wevtutil, newest events first
wevtutil qe Security /c:20 /rd:true /f:text
bash
# Linux: View systemd journal from the last hour
sudo journalctl --since "1 hour ago"

# Linux: View logs from the previous boot cycle
sudo journalctl -b -1

# Linux: Set file to append-only to reduce risk of log overwrite
sudo chattr +a /var/log/auth.log
  • Get-WinEvent -FilterHashtable is suitable for precise event filtering by LogName, Id, StartTime.
  • wevtutil qe is suitable for quick Event Log queries or exports.
  • journalctl -b -1 is commonly used to compare if anomalies appeared "before this boot."
  • chattr +a is a common practice for ext file systems, meaning the file can only be appended, not overwritten or deleted.

Log Management Key Comparison Table

AspectKey Description
Log PurposeRecord user activities and abnormal events as post-incident tracking and legal evidence; protect Non-repudiation
Syslog ProtocolDefaults to UDP port 514 (unreliable, packets may be lost during high traffic); switch to TCP to ensure reliable delivery; use TLS (Syslog over TLS) for encrypted transmission
Centralized ManagementImport logs from multiple devices/systems into SIEM for unified query and analysis
NormalizationLog formats (field names, time formats) from different devices are inconsistent; must unify formats before centralized analysis
Time Synchronization (NTP)Device clocks must be synchronized with a standard time source to ensure correct timing of cross-device logs, the foundation for audit and judicial evidence
Protection and RetentionLogs must not be arbitrarily modified by administrators; retention periods should meet regulatory or policy requirements

Syslog Severity Levels

LevelValueKeywordDescription
Emergency0EMERGSystem unusable (e.g., kernel panic)
Alert1ALERTAction must be taken immediately (e.g., primary database unresponsive)
Critical2CRITCritical conditions (e.g., hardware failure)
Error3ERRError conditions, needs investigation
Warning4WARNINGWarning conditions, not errors but noteworthy anomalies
Notice5NOTICENormal but significant conditions
Informational6INFOInformational messages
Debug7DEBUGDebug-level messages, usually disabled in production

Common Log Formats

FormatDescription
Syslog (RFC 5424)Linux/Unix standard log protocol, includes Priority, Timestamp, Hostname, App-Name, Message structure
JSONStructured logs, convenient for parsing and querying by tools like ELK (Elasticsearch + Logstash + Kibana)
CEF (Common Event Format)Defined by HP ArcSight, a semi-structured format widely supported by SIEM, fixed fields, easy cross-system integration
LEEF (Log Event Extended Format)Structured format used by IBM QRadar, similar to CEF but field definitions differ slightly
W3C Extended Log FormatWeb server log standard, default for IIS

Easily Confused Concepts

  • Retaining logs protects Non-repudiation, not confidentiality or availability.
  • Syslog runs on UDP; packets may be lost during high traffic, a design characteristic of UDP without retransmission.
  • Log formats differ across devices, requiring Normalization, do not confuse with De-identification.

Log Management Best Practices

  • Should record: Login success/failure, permission changes, data access, system errors, management operations.
  • Should not record: Passwords (including hash values), credit card numbers, PII (ID numbers, medical records), to avoid logs themselves becoming a source of data leakage.
  • Integrity protection: Logs should be Write Once Read Many (WORM) after being sent to prevent attackers from tampering with logs to cover intrusion traces.
  • Centralization: Centralize logs to SIEM or log platforms; logs scattered across hosts are difficult to correlate and analyze.
  • Retention period: Determined by regulations and policies (e.g., PCI DSS requires retention for at least one year, with the last three months immediately accessible).
  • Alert settings: Set real-time alerts for high-risk events (multiple login failures, privileged account operations, access outside business hours).

Digital Forensics: Order of Volatility

When collecting digital evidence during incident response, evidence should be collected from the most easily lost (most volatile) to the most stable. This principle originates from RFC 3227.

OrderData SourceVolatility
1CPU registers, CPU cacheHighest (lost when power is off)
2Main memory (RAM)Extremely high (lost when powered off)
3Running processes, network connection status, routing tablesHigh
4Temporary files, paging file (Paging File / Swap)Medium-High
5Hard disk dataMedium (non-volatile, but may be overwritten by malware)
6Remote logs, SIEM dataLow
7Archival media (tape, backup discs)Lowest
  • Live Forensics: Capture memory image (Memory Dump) without shutting down, then perform disk image (Disk Image).
  • Any operation on the system (e.g., executing tool programs) may change RAM content; assess carefully before evidence collection.
  • Write Blocker: Must avoid any writing to the target media during evidence collection. Common practices:
    • Hardware Write Blocker: External device, intercepts all write commands at the hardware level, connected between the target disk and the forensic machine; preferred choice.
    • Forensic Boot Environment: Boot from USB into Kali Forensic mode or WinPE forensic environment, do not mount local disks to avoid OS contamination.
    • Image to External Media: Use dd, FTK Imager, dcfldd to read the source and write the image to external media; the original disk is only read.

Disk Imaging Tool Comparison

ToolPlatformCharacteristics
ddLinux / macOSBuilt-in, bit-for-bit copy, no progress display by default (modern versions support status=progress), no built-in hash; an if=/of= mix-up can silently overwrite the source
dcflddLinuxForensic-enhanced version of dd (developed by US DoD Computer Forensics Laboratory), supports calculating hash while copying, progress display, synchronous writing to multiple targets
FTK ImagerWindows (GUI)From AccessData, can output E01 (Expert Witness) or AD1 format, built-in hash verification, supports preview while copying
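dcfldd's hash-while-copying behavior can be sketched in Python: the image and the hash are produced in a single read pass, so the evidence hash exists the moment imaging finishes (file paths and the block size here are placeholders):

```python
import hashlib

def image_with_hash(src_path: str, dst_path: str, block_size: int = 1 << 20) -> str:
    """Copy src to dst block by block, hashing on the same pass (dcfldd-style)."""
    h = hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(block_size)
            if not block:
                break
            h.update(block)   # hash is updated on the same read used for the copy
            dst.write(block)
    return h.hexdigest()
```

The returned digest is what gets recorded in the chain-of-custody document; a second read of the source should reproduce it exactly.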

Storage Media Space Structure and Hidden Data

After obtaining a disk image, investigators will look for hidden or residual data in the following areas in addition to existing files:

AreaDescriptionForensic Significance
Slack SpaceOS allocates storage in fixed-size clusters. If file size is not an integer multiple of the cluster, the space between the "actual file end" and "cluster end" is Slack SpaceMay contain residual data (content of the previous file that used the cluster), even if the original file is deleted or overwritten, fragments in Slack Space can still be read
Unallocated SpaceSectors in the file system not occupied by any existing files. After a file is deleted, the OS only marks the cluster as "available" and does not immediately clear the dataData can still be restored by forensic tools (e.g., Autopsy, FTK) before being overwritten by new files, commonly used to recover malicious programs or logs deleted by attackers
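Slack space size is simple arithmetic on the cluster size; a small helper, assuming a 4 KB cluster (the common NTFS default, used here for illustration):

```python
import math

def slack_space(file_size: int, cluster_size: int = 4096) -> int:
    """Bytes between the end of a file and the end of its last allocated cluster."""
    clusters = math.ceil(file_size / cluster_size)
    return clusters * cluster_size - file_size

# A 10,000-byte file occupies 3 clusters (12,288 bytes), leaving 2,288 slack bytes
print(slack_space(10000))  # 2288
```

Those trailing bytes are never cleared when the file is written, which is why fragments of earlier files can survive there.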

Windows Execution Trace Forensics (Registry and System Files)

Even if a program has been deleted, the following registry keys and NTFS system files can retain execution and file-activity records:

LocationDescription
Shimcache (AppCompatCache)HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\AppCompatCache, records program paths and timestamps that have been executed on the system (written back to the registry at shutdown/reboot)
Amcache.hveC:\Windows\appcompat\Programs\Amcache.hve, records hashes, paths, and first execution time of installed or executed applications
Prefetch (.pf)C:\Windows\Prefetch\, records DLLs loaded when a program is first executed, retains execution count and last 8 execution times
MFT (Master File Table)The core metadata table of NTFS, records attributes of every file (size, timestamp, data location); even if the file is deleted, MFT records can still be read by forensic tools. Records the "current" state of the file, does not retain historical change sequences.
$UsnJrnl (Update Sequence Number Journal)Records the sequence of all file and directory change events (creation, deletion, renaming, data changes, encryption, etc.) within the volume, each record has a USN sequence number and timestamp. Even if the target file has been deleted, log records still exist, making it the best source for tracking ransomware encryption behavior trajectories.
$LogFile (NTFS Transaction Log)Records metadata changes that are about to be or have been completed to restore MFT consistency after a system crash. Content is in units of transactions, can be used in forensics to restore recent MFT changes, but retention time is short (circular overwrite).
$BitmapRecords the allocation status of each cluster (0 = unused, 1 = allocated). Used in forensics to identify the scope of Unallocated Space, confirming which sectors may contain deleted data residuals.
  • Shimcache / Amcache forensic value: Can prove that a specific executable once existed on the system and was executed, even if the attacker has deleted the file afterwards.
  • Prefetch is disabled by default on Windows Server (to improve I/O performance), and ransomware often disables Prefetch for anti-forensics.

Linux Digital Forensics Common Paths

/proc/[pid]/ is a virtual directory in Linux procfs that reflects the runtime state of a specific process. Common forensic paths are as follows:

PathContentForensic Use
/proc/[pid]/mapsVirtual memory layout of the process, including mapped file paths and dynamic link libraries (.so)Confirm which shared libraries the process loaded, can discover maliciously injected .so
/proc/[pid]/cmdlineCommand line arguments used when starting the processView process startup parameters; unavailable once the process has exited
/proc/[pid]/environEnvironment variables inherited by the processFind hardcoded keys or settings
/proc/[pid]/statusProcess status (PID, PPID, UID, GID, etc.)Confirm parent-child relationship and execution identity of the process
/proc/[pid]/fd/All file descriptors currently opened by the processView files or network connections being read/written by the process
~/.bash_historyShell history command records, written upon logoutRestore attacker operation sequences; attackers often clear with history -c or unset HISTFILE
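On a Linux system, procfs entries are just text files, so the status information in the table can be read with ordinary file I/O; a minimal sketch that inspects the current process:

```python
import os

def proc_status(pid: int) -> dict:
    """Parse /proc/[pid]/status (Linux procfs) into a key -> value dict."""
    info = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, value = line.partition(":")
            info[key] = value.strip()
    return info

# Inspect the current process: PID, parent PID (PPid), and identity (Uid/Gid)
me = proc_status(os.getpid())
print(me["Pid"], me["PPid"], me["Uid"])
```

The same function pointed at a suspicious PID confirms the parent-child relationship and execution identity noted in the table.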

Chain of Custody

Records the complete custody history of digital evidence from collection to court, ensuring the integrity and legal validity of the evidence. Each transfer or access must be recorded:

ItemDescription
WhoIdentity of the person obtaining, transferring, or accessing
WhenPrecise timestamp
WhereLocation or storage position
WhatActions performed (e.g., creating disk image, transferring to forensic personnel)

Once the chain of custody is broken (e.g., evidence was left unattended or access was not recorded), the court may not accept the evidence.

Hash Integrity Verification: When collecting evidence, immediately calculate a hash (usually SHA-256) for both the original media and the image, and record it in the chain of custody document. All subsequent forensic operations are performed on copies, and the hash is re-verified before each start. If the hash matches, it can prove to the court that the copy is bit-for-bit identical to the original media and has not been tampered with.

PlatformCommand
Linuxsha256sum disk.img
Windowscertutil -hashfile disk.img SHA256
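The re-verification step before each forensic session can be sketched as a small helper; `recorded_hash` stands for the value written in the chain-of-custody document:

```python
import hashlib

def sha256_file(path: str) -> str:
    """Stream a file through SHA-256 in 1 MB blocks (handles large images)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def verify_before_analysis(path: str, recorded_hash: str) -> bool:
    """True only if the working copy still matches the custody-log hash."""
    return sha256_file(path) == recorded_hash.lower()
```

A False result means the copy can no longer be presented as bit-for-bit identical to the original media, and analysis should stop until a fresh copy is made.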
Linux Log Search and Forensic Commands
bash
# Use journalctl to search for SSH events in a specific time range
journalctl --since "2025-01-01 00:00:00" --until "2025-01-02 00:00:00" -u sshd

# Search for SSH brute force (login failure records)
grep "Failed password" /var/log/auth.log | tail -20

# Count SSH login failure source IPs (sorted by frequency)
grep "Failed password" /var/log/auth.log \
  | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head -10

# Memory Dump — Using LiME kernel module
sudo insmod /opt/lime/lime-$(uname -r).ko "path=/evidence/mem.lime format=lime"

# Use dd to create bit-for-bit disk image
sudo dd if=/dev/sda of=/evidence/disk.img bs=4M status=progress

# Calculate image file hash (ensure evidence integrity, maintain Chain of Custody)
sha256sum /evidence/disk.img > /evidence/disk.img.sha256
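The same failure-source tally as the grep/awk pipeline above can be done in Python, which is handy once logs have been collected to a central platform; the regex targets standard OpenSSH "Failed password" lines:

```python
import re
from collections import Counter

# Matches the source IP in OpenSSH "Failed password" log lines,
# including the "invalid user" variant
FAILED_RE = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+) port")

def top_failed_ips(lines, n=10):
    """Count SSH login-failure source IPs, most frequent first."""
    counts = Counter()
    for line in lines:
        m = FAILED_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts.most_common(n)
```

Feeding it the lines of /var/log/auth.log reproduces the sorted IP frequency list from the shell pipeline.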

Security Considerations for Log Recording

  • Prohibit recording sensitive data: Passwords, credit card numbers, personal ID numbers, etc., must not appear in logs.
  • Log integrity protection: Attackers often clear logs after intrusion to cover their tracks; logs should be forwarded in real-time to an independent SIEM or centralized log server.
  • Time synchronization: All systems should synchronize clocks via NTP to ensure the timeline of cross-system logs can be correctly correlated.
  • Log retention period: Determined by regulatory requirements (e.g., "Cyber Security Management Act" requires retention for at least 6 months) and business needs.

Memory Forensics

Memory (RAM) is the most volatile source of evidence, containing running processes, network connections, decrypted keys, and malicious code; this information is permanently lost after shutdown. Volatility is the most mainstream open-source memory forensics framework, capable of analyzing memory images (Memory Dump) from Windows / Linux / macOS.

Core Analysis Objectives:

Analysis ObjectVolatility Common Commands (v3)Forensic Significance
Running processeswindows.pslist / windows.pstreeList all processes and parent-child relationships, find malicious programs disguised as system processes (e.g., fake svchost.exe)
Network connectionswindows.netstatList all TCP/UDP connections and corresponding processes, find abnormal C2 communication connections
Injected malicious codewindows.malfindScan memory segments with "executable + readable/writable (RWX)" protection attributes and no corresponding disk file
DLL listwindows.dlllistList all DLLs loaded by the process, discover abnormally injected DLLs
Registry Hivewindows.hivelistList memory addresses of loaded Registry Hives, can further dump SAM / SYSTEM and other Hives

VAD (Virtual Address Descriptor) tree structure:

The Windows kernel records each process's memory segment allocations and protection attributes (e.g., PAGE_EXECUTE_READWRITE) in a VAD tree. During code injection (Process Injection, Process Hollowing), the attacker typically first allocates a memory segment with RWX (read/write/execute) permissions and then writes shellcode into it. Such a segment exists in the VAD tree but has no backing file on disk, which is a highly conspicuous anomaly and the main detection basis for windows.malfind.

Common Methods for Obtaining Memory Images

Operating SystemToolDescription
WindowsWinPmem, DumpIt, FTK ImagerCommercial or open-source tools, use kernel drivers to read physical memory and output in raw / lime format
LinuxLiME (Linux Memory Extractor)Kernel module, loaded via insmod to dump memory to local or transmit over the network
Virtual MachineHypervisor SnapshotVM snapshot directly contains memory state, easiest to obtain and cleanest during forensics

Business Continuity and Disaster Recovery

Backup Type Comparison Table

Backup TypeEnglishBackup ScopeBackup TimeRestore StepsStorage Space
Full BackupFull BackupAll data, regardless of whether it changed since the last backupLongest1 (Full is enough)Largest
Differential BackupDifferential BackupAll data changed since the last Full BackupIncreasing (larger later)2 (Latest Full + Latest Diff)Medium
Incremental BackupIncremental BackupData added or changed since the most recent backup of any type (Full, Diff, or Inc)ShortestMultiple (Latest Full + all subsequent Inc)Smallest

Backup Strategy Trade-offs

  • Full Backup = Slowest backup, simplest restore.
  • Differential Backup = Space is larger than incremental, but restore only needs two copies (Full + Latest Diff).
  • Incremental Backup = Fastest backup each time, but restore requires chaining multiple copies, the most complex process.
  • Common strategy in practice: Full on Sunday, Incremental (or Differential) Monday through Saturday.
  • RPO (Recovery Point Objective): The maximum time range of data loss allowed, determined directly by the backup cycle. The higher the backup frequency, the shorter the RPO; Full backup combined with hourly Transaction Log can compress RPO to within 1 hour.
  • 3-2-1 Backup Rule: At least 3 copies, stored on 2 different media, 1 copy off-site. This is the industry-recognized minimum standard.
  • WORM (Write Once Read Many) Storage: Data cannot be modified or deleted within the specified retention period after being written, effectively resisting ransomware encryption or attackers deleting backups. Cloud services (e.g., Azure Immutable Blob Storage) support WORM mode, an important part of modern backup strategies.
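The restore-step rules in the table can be expressed as a short function; this is a sketch that assumes a schedule using either differentials or incrementals after each full backup (a mixed chain is more complicated in practice):

```python
def restore_set(backups, target_day):
    """
    backups: list of (day, kind) tuples in chronological order,
             kind in {"full", "diff", "inc"}.
    Returns the backups needed, in apply order, to restore to target_day.
    """
    history = [b for b in backups if b[0] <= target_day]
    # Start from the most recent full backup
    last_full = max(i for i, (_, kind) in enumerate(history) if kind == "full")
    needed = [history[last_full]]
    after = history[last_full + 1:]
    diffs = [b for b in after if b[1] == "diff"]
    incs = [b for b in after if b[1] == "inc"]
    if diffs:
        needed.append(diffs[-1])  # differential: only the latest one is needed
    needed.extend(incs)           # incremental: every one since the full, in order
    return needed
```

With Full on day 0 and incrementals afterward, restoring to day 3 needs four backups; with differentials it needs only two (the full plus the latest diff), which is exactly the trade-off in the bullets above.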

SQL Server Backup Mapping (Industry practical example):

General Backup TypeSQL Server MappingSQL Server Characteristics
Full BackupFull BackupBacks up the entire database, including data and part of transaction logs, can be restored independently
Differential BackupDifferential BackupBacks up data pages (Data Extent) changed since the last Full, requires Full + Latest Diff for restore
Incremental BackupTransaction Log BackupBacks up transaction logs since the last log backup, requires Full + all subsequent Logs applied in order for restore
  • SQL Server's Transaction Log Backup concept is equivalent to incremental backup, but records "transactions" rather than "changed files."
  • SQL Server also has "Filegroup Backup": Backs up only specific filegroups, suitable for ultra-large databases to avoid full database backups every time.

Cross-Platform Backup Tool Comparison

AspectWindowsLinux
Built-in Backup Toolwbadmin (Windows Server Backup), File Historyrsync, tar, cp
Enterprise SolutionSQL Server Backup, Azure BackupBacula, BorgBackup, Restic
SnapshotVSS (Volume Shadow Copy Service)LVM Snapshot, Btrfs/ZFS Snapshot
SchedulingTask Schedulercron / systemd timer
Off-site BackupAzure Blob Storage, AWS S3rsync + SSH, rclone (supports multi-cloud)
Linux Backup Command Examples
bash
# rsync incremental backup (only transfers changed files, suitable for off-site backup)
rsync -avz --delete /var/www/ user@backup-server:/backup/www/

# tar full backup (compresses entire directory into a single file)
tar -czf /backup/full-$(date +%Y%m%d).tar.gz /var/www/

# Schedule daily incremental backup at 2 AM
# crontab -e
# 0 2 * * * rsync -avz --delete /var/www/ user@backup-server:/backup/www/ >> /var/log/backup.log 2>&1

# find to clean up backup files older than 30 days
find /backup/ -name "full-*.tar.gz" -mtime +30 -delete
Windows Backup Command Examples
powershell
# wbadmin execute full backup to network share
wbadmin start backup -backupTarget:\\BackupServer\Share -include:C: -quiet

# Use robocopy to mirror sync folder (/MIR will delete extra files in destination)
robocopy C:\Data \\BackupServer\Data /MIR /Z /LOG:C:\Logs\backup.log

# Create VSS snapshot (Volume Shadow Copy)
vssadmin create shadow /for=C:

RAID (Redundant Array of Independent Disks) Comparison Table

RAID LevelTechnical PrincipleMin DisksAllowed FaultsRead PerformanceWrite PerformanceAvailable SpaceApplicable Scenario
RAID 0Striping, no redundancy. Data is split and written alternately to each disk, all disk space is merged into a single logical disk20 (any disk failure = total data loss)Highest (parallel read)Highest100% (sum of all disk capacities)Video editing scratch space, scenarios needing high performance but not fault tolerance
RAID 1Mirroring, data is completely copied to each disk2n-1 (as long as 1 remains)High (dual-track read)No improvement (must write two copies simultaneously)50%OS disk, scenarios valuing data security
RAID 5Striping + Distributed Parity, parity is rotated across different disks31 (parity can rebuild one disk)HighDecreased (needs Parity calculation)(n-1)/nNAS (Network Attached Storage), common choice balancing performance and fault tolerance

RAID 5 Parity Distribution Illustration (3 disks)

StripeDisk 0Disk 1Disk 2
1A1A2P(A)
2B1P(B)B2
3P(C)C1C2

P(X) = Parity block for that stripe (calculated via XOR). Parity is rotated across different disks for each stripe to avoid parity becoming a bottleneck on a single disk. When any disk fails, the lost block can be calculated back using remaining data + parity (e.g., Disk 1 fails → A2 = A1 XOR P(A)).
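The XOR relationship described above can be demonstrated in a few lines of Python (a toy illustration with 4-byte blocks, not a real RAID implementation):

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Stripe 1 from the table above: Disk 0 holds A1, Disk 1 holds A2, Disk 2 holds P(A)
A1 = b"\x10\x20\x30\x40"
A2 = b"\x0f\x0e\x0d\x0c"
P_A = xor_blocks(A1, A2)        # parity written to Disk 2

# Disk 1 fails: rebuild its block from the survivors (A2 = A1 XOR P(A))
assert xor_blocks(A1, P_A) == A2
```

Because XOR is its own inverse, any single missing block in a stripe can be recomputed from the remaining blocks, which is exactly why RAID 5 tolerates one disk failure.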

Common RAID Misconceptions and Practical Configurations

  • RAID is not backup: RAID only protects against hard disk failure and cannot prevent ransomware encryption, accidental deletion, fire, etc.
  • Common industry combinations:
    • RAID 10 (1+0) = RAID 1 Mirroring + RAID 0 Striping. Balances performance and security, a common choice for database servers (e.g., SQL Server recommends RAID 10 for data files).
    • RAID 50 (5+0) = RAID 5 + RAID 0. Commonly used in large storage systems.
  • Enterprise-grade storage usually comes with Hot Spare disks, which rebuild automatically upon failure.

RAID 5 Intuitive Analogy

Imagine copying a story across 3 notebooks, stripe by stripe:

  • Stripe 1: Notebook A writes paragraph 1, Notebook B writes paragraph 2, Notebook C writes a summary page for paragraphs 1 and 2.
  • Stripe 2: Notebook A writes paragraph 3, Notebook B writes the summary page, Notebook C writes paragraph 4.
  • Stripe 3: Notebook A writes the summary page, Notebook B writes paragraph 5, Notebook C writes paragraph 6.

The summary page (parity) rotates responsibility among the notebooks. If any one notebook is lost, its content can be reconstructed from the paragraphs and summary pages in the other two.

  • RAID 6 (Dual Parity): Equivalent to writing two summary pages per stripe, distributed on two different disks, so it can tolerate 2 disk failures simultaneously. Requires at least 4 disks.
  • RAID 5 writes only one parity, tolerating 1 failure; RAID 6 writes two parities, tolerating 2 failures, but write performance is lower (needs two Parity calculations).

Recovery Site Type Comparison Table (Hot / Warm / Cold)

TypeEnglishEquipment StatusData TimelinessActivation TimeCost
Hot SiteHot SiteComplete equipment identical to main site, continuously operatingReal-time sync (or extremely short delay)Within minutes (almost immediate switch)Highest
Warm SiteWarm SiteEquipment ready but standby, requires manual startup and data syncPeriodic backup (hours to a day)Hours to daysMedium
Cold SiteCold SiteOnly site, power, and basic infrastructure, no equipmentRequires restore from backup mediaDays to weeksLowest

RTO / WRT / MTPD / RPO

  • RTO (Recovery Time Objective): The maximum time service interruption is allowed after a system shutdown. Hot Site RTO is shortest; Cold Site RTO is longest.
  • WRT (Work Recovery Time): The additional time required to verify data integrity, tune settings, and restore normal operations after the system is restored (RTO achieved). The actual interruption time felt by users = RTO + WRT.
  • MTPD (Maximum Tolerable Period of Disruption): The absolute upper limit of interruption time the organization can bear. Exceeding this time will cause unacceptable business damage. MTPD ≥ RTO + WRT.
  • RPO (Recovery Point Objective): The maximum time range of data loss allowed after a disaster, determined by the backup cycle. Hot Site (real-time sync) RPO is close to 0; Cold Site (periodic backup) RPO could be days ago.
  • Risk Appetite drives goal setting: High risk appetite (can accept longer interruption) → Allows longer RTO/RPO → Can choose lower-cost backup solutions.
  • RPO looks at the "past" (how much data can be lost), RTO + WRT looks at the "future" (how fast to recover), MTPD is the hard deadline.
  • Industry Practice: Core systems in the financial industry usually require RTO < 4 hours, RPO < 1 hour, so they mostly adopt Hot Site + real-time data sync (e.g., SQL Server Always On Availability Group).
  • General enterprise ERP systems can accept RTO < 24 hours, Warm Site combined with daily differential backup is sufficient.
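The MTPD ≥ RTO + WRT constraint is a one-line planning check; the figures below are illustrative, not prescriptive:

```python
def disruption_within_tolerance(rto_h: float, wrt_h: float, mtpd_h: float) -> bool:
    """Planning sanity check: total outage (RTO + WRT) must not exceed MTPD."""
    return rto_h + wrt_h <= mtpd_h

# Example: RTO 4 h plus WRT 2 h is a 6 h total outage
print(disruption_within_tolerance(4, 2, 8))  # fits an 8 h MTPD
print(disruption_within_tolerance(4, 2, 5))  # exceeds a 5 h MTPD
```

If the check fails, either the recovery solution must be upgraded (shorter RTO/WRT, e.g., Warm Site to Hot Site) or management must formally accept a higher MTPD.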

Data Storage Tiering (Hot / Warm / Cold)

Allocate data to different cost storage tiers based on access frequency to achieve a balance between performance and cost.

TierCharacteristicsAzure Blob MappingTypical Data
Hot StorageHigh-frequency access, low latency, high storage costHot tierRecent transaction records, current month reports
Warm StorageLow-frequency access, lower storage cost, retrieval fee appliesCool tierHistorical data from the past 1–3 years
Cold StorageArchival, retrieval takes hours, lowest costArchive tierOld data retained for regulatory compliance, archives 3+ years old

Data can be automatically tiered over time (e.g., Azure Blob Lifecycle Management) or moved manually.
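An age-based tiering rule, the kind a lifecycle policy automates, can be sketched as follows; the 30-day and 365-day thresholds are illustrative, not Azure defaults:

```python
from datetime import date

def storage_tier(last_access: date, today: date,
                 hot_days: int = 30, cool_days: int = 365) -> str:
    """Assign a storage tier by data age (thresholds are illustrative)."""
    age = (today - last_access).days
    if age <= hot_days:
        return "hot"
    if age <= cool_days:
        return "cool"
    return "archive"
```

A lifecycle management job would run such a rule periodically and move blobs whose tier no longer matches their age.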

Relational Database Tiering Implementation

Relational databases (e.g., SQL Server) do not have native tiering management, usually simulated in the following ways:

  • Partition + Filegroup: Recent partitions go to SSD filegroups (Hot), old data partitions go to HDD filegroups (Cold), data is moved via Partition Switching after aging, no need for row-by-row copying.
  • Archive Database: Batch move data exceeding the retention period to an independent historical database, keep the main database lean, historical database is for query only.

Elasticsearch has native ILM (Index Lifecycle Management): Indices can be set to automatically move from Hot nodes (SSD, responsible for write and real-time query) to Warm nodes (HDD, read-only) and then to Cold/Frozen nodes, executed automatically by the engine without manual intervention.

Data Storage Tiering vs Recovery Site: Same Name, Different Meaning

"Hot/Warm/Cold" in recovery sites refers to backup readiness (whether equipment is ready, how fast to switch); in data storage tiering, it refers to the trade-off between access frequency and cost. The two sets of terms have the same name but independent concepts.

BCP Test Type Comparison Table

Test TypeEnglishDescriptionRiskCost
Tabletop ExerciseTabletop ExerciseSimulate disaster scenarios through meeting discussions, step-by-step review of plan stepsNone (discussion only)Lowest
Structured Walk-ThroughStructured Walk-ThroughRepresentatives from various departments jointly review plan content item by item, confirming roles and proceduresNoneLow
Simulation TestSimulation TestSimulate real disaster scenarios (e.g., simulate server room fire), execute notification and response processes, but do not actually switch systemsLowMedium
Parallel TestParallel TestBackup system actually starts and processes business, runs in parallel with the main system to compare resultsLow (main system not interrupted)Medium-High
Full Interruption TestFull Interruption TestMain system completely shut down, all business switched to backup siteHighest (if backup fails, business is impacted)Highest

TIP

  • Parallel Test: Backup system actually processes transactions while the main system is still online, verifying backup capability without affecting formal business.
  • Full Interruption Test is the only way to verify "true switching capability," but the risk is highest, usually executed only after management approval.
  • Test frequency recommendation: Tabletop exercise once every six months, parallel/interruption test at least once a year.

Relationship between BCP and DRP

BCP (Business Continuity Plan) covers the complete strategy for an organization to maintain critical business operations during a disaster; DRP (Disaster Recovery Plan) is a subset of BCP, focusing on the technical recovery of IT systems and data. Both are formulated based on BIA analysis results.

AspectBCPDRP
ScopeEntire organization (business processes, personnel, communication, IT)IT infrastructure (servers, network, data)
GoalMaintain minimum business operations during a disasterRestore IT systems to normal operating state
Key InputBIA (Business Impact Analysis)RTO / RPO goals from BIA, critical system list, backup architecture
Responsible RoleSenior management, business ownersIT department, security team

BIA (Business Impact Analysis) is executed before BCP/DRP formulation and is the common foundation for both. By analyzing the impact of interruption on various business processes, critical business functions are identified, and RTO, RPO, and backup priorities are set accordingly, serving as input for BCP strategy planning and DRP technical design.

Physical Security

Physical Security Threats and Control Measures Comparison Table

ConceptEnglishDescription
TailgatingTailgatingUnauthorized personnel closely follow authorized personnel through access control without independent card verification
PiggybackingPiggybackingAuthorized personnel knowingly and actively bring in unauthorized personnel (e.g., "helping open the door")
MantrapMantrap / AirlockDouble-door interlocking design, the second door can only be opened after the first door is closed, forcing individual verification
CCTVCCTVClosed-circuit television monitoring, providing real-time monitoring and post-incident review capabilities
Visitor ManagementVisitor ManagementVisitor registration, wearing ID badges, full-process escort, badge retrieval upon departure
Environmental ControlEnvironmental ControlsTemperature and humidity monitoring, fire suppression systems (FM-200 gas fire suppression), UPS, backup generators
Cable ManagementCable ManagementNetwork line labeling and physical isolation to prevent unauthorized connection or eavesdropping

Tailgating vs Piggybacking

  • Tailgating: The person being followed is unaware, a passive security vulnerability.
  • Piggybacking: The person being followed is aware and actively cooperates, a behavior violating security policy.
  • Countermeasure: Mantrap can prevent both simultaneously because everyone must be verified independently.
  • Common defense combination for server rooms: Access card + Mantrap + CCTV + Visitor registration system.

Environmental Control and Fire Suppression Systems

Control ItemEnglishDescription
HVACHVAC (Heating, Ventilation, and Air Conditioning)Server rooms must maintain 18–27°C and 40–60% relative humidity. Overheating leads to equipment failure, excessive humidity leads to condensation damaging circuits.
Wet PipeWet PipePipes are pre-filled with water, spraying immediately upon fire detection. Fastest reaction but damages electronic equipment, not suitable for server rooms.
Dry PipeDry PipePipes are filled with compressed air, injecting water only when triggered. Suitable for low-temperature environments (anti-freeze), still sprays water, not suitable for precision equipment areas.
Pre-ActionPre-ActionCombines smoke and temperature dual detection, injecting water only when both are triggered. Reduces risk of accidental spraying, suitable for areas surrounding data centers.
Clean AgentClean AgentUses inert gas or chemical gas (e.g., FM-200, Novec 1230, CO₂) instead of water. Non-conductive, leaves no residue, first choice for server rooms. CO₂ has suffocation risk, must be paired with personnel evacuation alarms.

Fire Suppression System Selection Basis

  • Office: Wet Pipe, lowest cost, fastest reaction.
  • Server Room Perimeter (Occupied Areas): Pre-Action, dual confirmation reduces accidental spraying.
  • Server Room Core (Server Area): Clean Agent (FM-200 / Novec 1230), does not damage equipment.
  • Unoccupied Enclosed Spaces: CO₂, strongest fire suppression effect but displaces oxygen, personnel must evacuate first.

TEMPEST (Telecommunications Electronics Material Protected from Emanating Spurious Transmissions)

TEMPEST is the electromagnetic leakage protection standard defined by the US NSA. Electronic equipment generates electromagnetic radiation (Emanation) during operation; attackers can capture signals from a distance using high-sensitivity receivers to reconstruct screen images or keyboard input.

Protection MeasureDescription
Faraday CageEnclose space with conductive materials (metal mesh, metal plates) to block electromagnetic waves from entering or leaving. Server room level requires integration of walls, floors, and ceilings, and all incoming/outgoing lines must pass through filters.
TEMPEST Certified EquipmentEquipment itself reduces electromagnetic radiation from the design stage, meeting NSA certification standards (e.g., NSTISSAM TEMPEST/1-92).
Distance ControlLimit the distance between sensitive equipment and building exterior walls (Zone control), utilizing the characteristic that electromagnetic signals attenuate with distance.
White Noise GeneratorEmit random electromagnetic signals to cover equipment radiation, interfering with the attacker's signal capture.

TIP

  • TEMPEST guards against Passive Eavesdropping; attackers do not need to contact the target equipment.
  • Daily applications of Faraday cages: microwave oven shells, mobile phone signal shielding bags, MRI rooms.
  • Paired with cable management: Fiber optics do not radiate electromagnetic waves and are the preferred transmission medium for high-security environments.

Cable Management Practices

ItemDescription
Structured CablingFollow TIA/EIA-568 standard, distinguish between Horizontal Cabling and Backbone Cabling.
Labeling SystemLabel both ends of every line with a unique identifier, corresponding to the Patch Panel Diagram, ensuring traceability.
Physical IsolationSeparate power lines and data lines in different cable trays to avoid EMI (Electromagnetic Interference). Manage copper cables and fiber optics separately.
Locked Wiring ClosetWiring closets (IDF, Intermediate Distribution Frame) should be locked and controlled to prevent unauthorized connection or eavesdropping.
Fiber Optic Anti-EavesdroppingFiber optics do not radiate electromagnetic waves; eavesdropping requires physical bending of the fiber (Fiber Tapping), which can be detected by optical power meters detecting abnormal attenuation.

Media Sanitization

When devices are scrapped or replaced, formatting or deleting files cannot completely clear data; corresponding destruction methods must be chosen according to the media type:

MethodHDD (Traditional Magnetic Hard Disk)SSD / Flash Storage
Degaussing✅ Effective (destroys magnetic records)❌ Ineffective (Flash does not rely on magnetism, degaussing cannot clear data)
Overwrite / Wipe✅ Effective (still a common Clear method for HDD, but should be executed according to current standards and organizational procedures)⚠️ Partially effective (wear leveling may retain old data blocks, requires support from manufacturer's secure sanitization commands)
ATA Secure Erase✅ Effective✅ Effective (if supported by manufacturer, controller ensures complete clearing)
Cryptographic Erase✅ Effective✅ Effective (encrypt all data first, then destroy the key, data becomes unreadable, recommended method for SSD)
Physical Destruction✅ Most thorough✅ Most thorough (NSA requires grinding to ≤ 2mm fragments)

NIST SP 800-88 Rev. 2 "Guidelines for Media Sanitization" (September 2025) divides media sanitization into three levels:

LevelDescriptionApplicable Scenario
ClearLogical sanitization (overwrite), no special equipment requiredGeneral confidential data, reuse within the organization
PurgeDegaussing, ATA Secure Erase, NVMe Sanitize, or compliant cryptographic erase; can resist laboratory-level recovery attacksSensitive data, device moves outside organizational control
DestroyPhysical destruction, ensuring data is completely unrecoverableTop secret level, media must be completely scrapped
  • Conditions for Cryptographic Erase to reach Purge level: Encryption enabled from device configuration, uses NIST-approved algorithms like AES-256, key destruction is verifiable; if any condition is not met, it only reaches Clear level.
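The idea behind Cryptographic Erase can be illustrated with a toy stream cipher built from SHA-256 (a simplified stand-in for the device's AES encryption, not a real implementation): once the key is destroyed, the ciphertext that remains on the media is permanently unreadable.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy counter-mode keystream from SHA-256 (illustration only, not AES)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR data with the keystream; the same call encrypts and decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

# The drive encrypts everything at rest with its media encryption key
key = b"device-media-encryption-key"
stored = xor_crypt(b"sensitive record", key)

# Cryptographic Erase: destroy the key; the stored bytes stay but are unreadable
key = None
assert stored != b"sensitive record"
```

This is why the Purge claim depends entirely on the key handling: if the key was weak, not always in use, or not verifiably destroyed, the "erased" data is still recoverable.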

Common Misconceptions

  • SSDs must not rely on degaussing; cryptographic erase or physical destruction should be prioritized.
  • "Formatting" ≠ "Sanitization": Formatting only deletes file indices, the data itself remains residual and can be restored by forensic tools.

SP 800-88 Rev. 1 (2014) Background

  • Rev. 1 directly listed technical operation details (e.g., "HDD overwrite with all zeros at least once"), which have been widely cited for years.
  • Rev. 2 shifts from an "operation manual" to a "framework for establishing organizational-level sanitization programs," with most technical details now referencing IEEE 2883, NSA specifications, or organization-approved standards.
  • Rev. 2 adds NVMe-specific guidance and explicitly defines the conditions for cryptographic erase to reach the Purge level (resolving the ambiguity of Rev. 1).

Network Security

Network Attack Four Basic Types Comparison Table

Corresponding to the CIA triad, network attacks can be divided into four basic types:

TypeDamaged AttributeDescriptionTypical Technique
InterruptionAvailabilityData is cut off during transmission and cannot reach the destinationDDoS, cutting physical lines
InterceptionConfidentialityIntercepted and eavesdropped during transmission, content leaked but transmission is unaffectedPacket Sniffing, Man-in-the-Middle eavesdropping
ModificationIntegrityData is modified by an unauthorized party before being sent, the receiver is unawareMITM packet tampering, SQL Injection
FabricationAuthenticityForging identity or data, pretending to be someone else to sendIP Spoofing, phishing emails

TIP

  • Interruption → You don't receive it (Availability destroyed).
  • Interception → You received it, but the content was seen (Confidentiality destroyed).
  • Modification → You received it, but the content was changed (Integrity destroyed).
  • Fabrication → You received it, but the sender is fake (Authenticity destroyed).

Network Architecture and Protocol Comparison Table

OSI 7-Layer Responsibilities

| OSI Layer | PDU | Responsibility | Common Protocols & Technologies |
| --- | --- | --- | --- |
| L7 Application | Data | Faces users or applications directly; provides network service interfaces (e.g., web browsing, file transfer) | HTTP/HTTPS, FTP, DNS, DHCP, SSH |
| L6 Presentation | Data | Data format conversion, encryption/decryption, and compression; ensures different systems understand each other's data formats | SSL/TLS, JPEG, Base64 |
| L5 Session | Data | Establishes, manages, and terminates sessions between endpoints; handles synchronization and dialogue control | NetBIOS, RPC |
| L4 Transport | Segment / Datagram | Reliable or fast end-to-end (port-to-port) transmission; flow control and error recovery | TCP (reliable), UDP (low latency) |
| L3 Network | Packet | Cross-network logical addressing (IP address) and routing; determines the best path from source to destination | IP (IPv4/v6), ICMP, BGP, OSPF |
| L2 Data Link | Frame | Intra-network physical addressing (MAC address), frame encapsulation, and error detection; controls transmission between nodes | Ethernet (MAC), ARP, VLAN, PPP |
| L1 Physical | Bit | Defines electrical, optical, or wireless signal specifications for the physical medium; sends/receives raw bitstreams | 1000BASE-T, Wi-Fi RF, fiber, coaxial cable |

TCP/IP Model vs OSI Comparison

The TCP/IP model has two versions: the "Classic 4-Layer" and the "Modern Practical 5-Layer." The one defined by the US Department of Defense (DoD) in the early days (RFC 1122) is 4 layers. With the development of network hardware technology, mainstream textbooks and network management certifications (e.g., Cisco CCNA) generally adopt the 5-layer model (also called the hybrid model).

| OSI Layer | TCP/IP 4-Layer (Classic DoD) | TCP/IP 5-Layer (Modern Practical) |
| --- | --- | --- |
| L7 App / L6 Pres / L5 Sess | Application | Application |
| L4 Transport | Transport | Transport |
| L3 Network | Internet | Network |
| L2 Data Link | Network Access | Data Link |
| L1 Physical | ⬆️ (merged into Network Access) | Physical |

TIP

  • Reason for merging: L5~L7 merged because modern applications handle session, encryption, and business logic; L1~L2 merged because physical NICs and drivers usually operate bound together.
  • PDU and debugging correspondence: Find Port number → Check L4 (Wireshark); Find IP address → Check L3 (Ping, routing table); Find MAC address → Check L2 (Switch, VLAN).
  • TCP protocol vs TCP/IP model: TCP is a single transport protocol at L4; TCP/IP is the collective name for the entire L1~L7 internet communication architecture, even if UDP is used at the bottom, it still runs within the TCP/IP model framework.
  • For the complete encapsulation/decapsulation process of PDU at each layer, see the PDU Comparison Table.

💡 Protocol Abbreviation Quick Check

Protocol Number is located in the L3 IPv4 header, identifying the upper-layer protocol used by the Payload (different from the Port number at L4, which identifies the application).

| Protocol Number | Protocol | Description |
| --- | --- | --- |
| 1 | ICMP | Network diagnostics and error reporting (used by ping) |
| 6 | TCP | Reliable transmission with handshake and retransmission mechanisms |
| 17 | UDP | Low-latency transmission, no connection guarantee |
| 47 | GRE | VPN tunnel encapsulation protocol |
| 50 | ESP | IPsec encrypted packets (common in VPNs) |
| 51 | AH | IPsec authentication header; verifies only, does not encrypt |
| 89 | OSPF | Interior dynamic routing protocol |
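Since these numbers live in a single byte of the IPv4 header (offset 9), a minimal Python sketch shows where firewalls and sniffers read them; the addresses are placeholders and the checksum is left at zero:

```python
import struct

# Pack a minimal 20-byte IPv4 header carrying TCP (protocol 6).
version_ihl = (4 << 4) | 5          # IPv4, 5 x 32-bit words = 20-byte header
header = struct.pack(
    "!BBHHHBBH4s4s",
    version_ihl, 0, 20, 0, 0,
    64,                              # TTL
    6,                               # Protocol field: 6 = TCP
    0,                               # checksum (left at 0 in this sketch)
    bytes([192, 0, 2, 1]),           # source IP (documentation range)
    bytes([192, 0, 2, 2]),           # destination IP
)

# The Protocol field is the 10th byte (offset 9) of the IPv4 header.
protocol = header[9]
print(protocol)  # 6
```

A packet filter classifying "ESP vs TCP vs UDP" inspects exactly this byte, independent of any L4 port numbers.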

Other Protocol Abbreviations

| Abbreviation | Full Name | Description |
| --- | --- | --- |
| ARP | Address Resolution Protocol | Resolves IP addresses to MAC addresses within a local network |
| BGP | Border Gateway Protocol | Inter-domain routing protocol connecting autonomous systems on the internet |
| 802.1Q | IEEE VLAN tagging standard | Adds a VLAN Tag to the Ethernet frame (VLAN is the technical concept; 802.1Q is the implementing protocol) |
| PPP | Point-to-Point Protocol | Used for authentication and line establishment on point-to-point links; modern broadband (PPPoE) still uses its variants |
| RPC | Remote Procedure Call | Lets programs call functions on remote servers as if they were local; gRPC is the modern mainstream implementation |
| NetBIOS | Network Basic Input/Output System | Underlying protocol of early Windows Network Neighborhood; should be disabled in modern environments |

NetBIOS Security Risks

NetBIOS is an extremely outdated protocol; it is strongly recommended to disable it completely in modern enterprise networks because:

  • Generates massive broadcast traffic in the internal network, consuming bandwidth.
  • Attackers can use NBT-NS Poisoning to spread laterally in the enterprise internal network, a common infiltration technique for ransomware.
  • Modern Windows file sharing has switched to the more secure SMB (Port 445) and no longer requires NetBIOS.

PDU (Protocol Data Unit) Comparison Table

| OSI Layer | PDU | Description |
| --- | --- | --- |
| L5-L7 Application | Data / Message | Raw data generated by the application; no protocol header added yet. |
| L4 Transport | Segment (TCP) / Datagram (UDP) | TCP cuts data into numbered segments to ensure reliable, ordered delivery; UDP does not guarantee order and does not retransmit. |
| L3 Network | Packet | Adds source/destination IP addresses; determines the routing path. |
| L2 Data Link | Frame | Adds source/destination MAC addresses and an FCS (Frame Check Sequence) error-check code. |
| L1 Physical | Bit | Electrical signals, optical signals, or radio waves. |
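The FCS mentioned at L2 is a CRC-32 over the frame contents. As a rough sketch (illustrative only, not wire-accurate Ethernet framing), Python's `zlib.crc32` shows how a single flipped bit is detected:

```python
import zlib

frame_payload = b"example ethernet payload"
fcs = zlib.crc32(frame_payload)      # sender appends a CRC-32 over the frame contents

# Flip one bit "in transit": the receiver's recomputed CRC no longer matches
corrupted = bytes([frame_payload[0] ^ 0x01]) + frame_payload[1:]
receiver_crc = zlib.crc32(corrupted)

print(receiver_crc != fcs)           # True → error detected, frame is discarded
```

Note that the FCS only detects accidental corruption; it is not a security control, since an attacker who modifies a frame can simply recompute the CRC.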

Encapsulation and Decapsulation

Imagine the process of mailing a package: you put the document (Data) into an envelope, write the recipient Port (L4), put it in an outer box with an address (L3 + IP), and finally hand it to the courier (L2 converted to frame, L1 converted to electrical signal). Each extra layer is called "Encapsulation"; the recipient side tears open each layer in order, called "Decapsulation."

Encapsulation order: Data → Segment / Datagram → Packet → Frame → Bit

Segment is the packaging unit for TCP, Datagram for UDP.
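The mailing analogy can be sketched byte by byte; the headers below are simplified placeholders, not real TCP/IP formats:

```python
import struct

data = b"GET / HTTP/1.1"                         # L7: application data

# L4: prepend a simplified "TCP-like" header with source/destination ports
segment = struct.pack("!HH", 49500, 443) + data

# L3: prepend simplified source/destination IP addresses
packet = bytes([10, 0, 0, 5]) + bytes([93, 184, 216, 34]) + segment

# L2: prepend placeholder MAC addresses and append a dummy 4-byte FCS
frame = b"\xaa" * 6 + b"\xbb" * 6 + packet + b"\x00" * 4

# Decapsulation: the receiver peels each layer in reverse order
recovered_packet = frame[12:-4]                  # strip L2 header and FCS
recovered_segment = recovered_packet[8:]         # strip L3 addresses
recovered_data = recovered_segment[4:]           # strip L4 ports
print(recovered_data)                            # b'GET / HTTP/1.1'
```

Each layer only reads and strips its own header, which is exactly why a router (L3) never needs to understand the application data inside.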

TCP and UDP

TCP and UDP are both L4 transport layer protocols, determining "how to send" data, regardless of "where to send" (that is the job of L3 IP).

| Aspect | TCP | UDP |
| --- | --- | --- |
| Full Name | Transmission Control Protocol | User Datagram Protocol |
| Connection Mode | Connection-oriented (establishes a connection before sending data) | Connectionless (sends directly) |
| Reliability | Guaranteed delivery: sequence numbers, acknowledgments, timeout retransmission | Not guaranteed: packets may be lost or arrive out of order |
| Speed | Slower (handshake and acknowledgments add overhead) | Faster (no handshake or acknowledgments) |
| Common Applications | HTTP/HTTPS, SSH, SMTP, FTP | DNS queries, VoIP, streaming media, QUIC |

TCP Three-way Handshake

TCP must complete three steps (SYN → SYN-ACK → ACK) before sending data, confirming that both sides can send and receive:

Common TCP Flags

| Flag | Description |
| --- | --- |
| SYN | Request to establish a connection |
| ACK | Acknowledge receipt |
| FIN | Request to terminate the connection normally |
| RST | Force-reset the connection (abnormal interruption) |
| PSH | Deliver to the application layer immediately; do not wait for the buffer to fill |

Security Relevance

  • SYN Flood: Attacker sends massive SYN requests but deliberately does not complete the handshake, exhausting the server's half-open connection table, a DoS attack.
  • TCP RST Injection: Attacker forges RST packets to force-terminate legitimate connections, can be used to interfere with communication or censorship (commonly used by firewalls).
  • UDP Amplification: UDP's connectionless nature allows attackers to forge source IPs, using services with responses much larger than requests (e.g., DNS, NTP) to amplify attack traffic, see the L3-L7 Denial of Service Attack section.

Common Port Comparison Table

The Port number is located in the L4 TCP/UDP header and is used to identify which application the packet should be handed to. For example, when a browser connects to a server, the server receives the request on Port 80 or 443, letting the operating system know to hand the traffic to the web service rather than other programs.

Port numbers are divided into three categories based on range:

| Range | Name | Description |
| --- | --- | --- |
| 0–1023 | Well-Known Ports | Officially assigned by IANA to common services; binding in this range on Linux/Unix requires root privileges |
| 1024–49151 | Registered Ports | Third-party applications register these with IANA, e.g., MySQL (3306), RDP (3389) |
| 49152–65535 | Ephemeral (Dynamic) Ports | Temporary ports dynamically assigned to clients by the operating system; released when the connection ends |
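The ephemeral range is why clients never configure their own port: binding to port 0 asks the OS to pick a free one. A small Python sketch:

```python
import socket

# Bind to port 0: the operating system assigns a free ephemeral port automatically
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind(("127.0.0.1", 0))
    assigned_port = s.getsockname()[1]

print(assigned_port)  # a port chosen by the OS from its dynamic range
```

The exact range the OS draws from varies (Linux defaults to 32768–60999, configurable via `net.ipv4.ip_local_port_range`), but it always falls above the well-known range.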

IANA (Internet Assigned Numbers Authority)

Responsible for managing global internet number resources, including the allocation and registration of IP addresses, AS numbers, and Port numbers.

Developers usually do not need to apply to IANA for port numbers; ports in the Ephemeral range can be used freely. An application is only needed when developing a protocol or service intended to be public and become an industry standard, in which case a fixed Registered Port is requested from IANA so other systems can identify which service the port belongs to.

Common Service Ports

| Port | Protocol | Transport Layer | Description |
| --- | --- | --- | --- |
| 20 | FTP-DATA | TCP | FTP data transmission |
| 21 | FTP | TCP | FTP control connection |
| 22 | SSH | TCP | Encrypted remote login and file transfer (SCP/SFTP) |
| 23 | Telnet | TCP | Plaintext remote login; deprecated and insecure |
| 25 | SMTP | TCP | Email transmission (server to server) |
| 53 | DNS | TCP/UDP | Domain name resolution (UDP for queries, TCP for zone transfers) |
| 67/68 | DHCP | UDP | 67 server side, 68 client side; dynamic IP allocation |
| 80 | HTTP | TCP | Plaintext web transmission |
| 110 | POP3 | TCP | Email retrieval (deleted from server after download) |
| 143 | IMAP | TCP | Email retrieval (mail retained on server) |
| 161/162 | SNMP | UDP | 161 for queries, 162 for Traps (active alerts) |
| 389 | LDAP | TCP | Directory service queries (plaintext) |
| 443 | HTTPS | TCP | TLS-encrypted web transmission |
| 445 | SMB | TCP | Windows file sharing and Network Neighborhood |
| 465 | SMTPS | TCP | SMTP over TLS (encrypted email transmission) |
| 587 | SMTP Submission | TCP | Email client submission (requires authentication) |
| 636 | LDAPS | TCP | LDAP over TLS (encrypted directory queries) |
| 853 | DoT | TCP | DNS over TLS (encrypted DNS queries) |
| 993 | IMAPS | TCP | IMAP over TLS |
| 995 | POP3S | TCP | POP3 over TLS |
| 1433 | MSSQL | TCP | Microsoft SQL Server |
| 1521 | Oracle DB | TCP | Oracle Database |
| 3306 | MySQL | TCP | MySQL / MariaDB database |
| 3389 | RDP | TCP | Windows Remote Desktop Protocol |
| 5432 | PostgreSQL | TCP | PostgreSQL database |
| 6379 | Redis | TCP | Redis cache database (no authentication by default; never expose directly) |
| 8080 | HTTP Alt | TCP | Alternative HTTP port, common for development or proxies |
| 27017 | MongoDB | TCP | MongoDB database |

TLS Positioning in OSI Model and HTTPS / HSTS

TLS (Transport Layer Security) spans L5 (Session Layer) and L6 (Presentation Layer) in the OSI model, but is classified as part of the application layer in the TCP/IP model. TLS is not an independent transport protocol, but provides encryption and identity authentication on top of TCP and below the application layer. Version evolution and security of each version are detailed in the Encryption Protocol Version Evolution Comparison Table.

HTTPS (HTTP over TLS): HTTP traffic is encrypted and transmitted via TLS, default Port 443.

HSTS (HTTP Strict Transport Security): The server informs the browser via the HTTP header Strict-Transport-Security that subsequent requests to the domain must use HTTPS, preventing SSL Stripping attacks (attackers downgrading HTTPS to HTTP).

| HSTS Directive | Description |
| --- | --- |
| max-age=31536000 | How long (in seconds) the browser remembers the policy; one year in this example |
| includeSubDomains | All subdomains are also forced to HTTPS |
| preload | Adds the domain to the browser's built-in HSTS Preload List, eliminating the HTTP window before the first connection |
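The directives above are just a single response header string; a minimal Python sketch (values taken from the table) composes it the way any web framework would:

```python
# Compose the Strict-Transport-Security header from the directives above
max_age = 31536000  # one year, in seconds
hsts_value = f"max-age={max_age}; includeSubDomains; preload"

response_headers = {
    # Only meaningful on HTTPS responses; browsers ignore HSTS sent over plain HTTP
    "Strict-Transport-Security": hsts_value,
}
print(response_headers["Strict-Transport-Security"])
```

Sending this on every HTTPS response is enough; the browser caches the policy for `max-age` seconds and refuses to downgrade to HTTP in the meantime.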

Joining the HSTS Preload List

General HSTS requires the user to successfully connect once, and the browser will remember "this website must use HTTPS." Before that, the first connection might still go over HTTP, leaving a window for attack. The Preload List is a list built into browsers at the factory; once a domain is listed, anyone visiting for the first time will be forced to use HTTPS.

Application process:

  1. Add Strict-Transport-Security header to HTTP response, must include preload, includeSubDomains, and max-age of at least one year.
  2. Submit domain application at hstspreload.org.
  3. Once approved, it is included in the Chromium source code, and then carried along when Chrome, Firefox, Edge, Safari update.

Note: includeSubDomains is a mandatory condition; all subdomains must support HTTPS. Once joined, if you want to remove it, you must apply separately and wait for the next browser version update to take effect; do not apply lightly.

SSL Stripping Attack

Attackers intercept the first HTTP request sent by the user, maintain an HTTPS connection with the server, and return a tampered HTTP page to the victim, causing account and password to be transmitted in plaintext. See SSL Stripping.

C# Example: HttpClient Configuring TLS 1.3
```csharp
using System.Net.Http;
using System.Net.Security;
using System.Security.Authentication;

// Create a SocketsHttpHandler specifying the allowed TLS versions
SocketsHttpHandler handler = new() {
    SslOptions = new() {
        // Only allow TLS 1.2 and TLS 1.3
        EnabledSslProtocols = SslProtocols.Tls12 | SslProtocols.Tls13,
    },
};

using HttpClient client = new(handler);

// Send HTTPS request
HttpResponseMessage response = await client.GetAsync("https://example.com/api/data");
response.EnsureSuccessStatusCode();

string body = await response.Content.ReadAsStringAsync();
Console.WriteLine($"Response length: {body.Length} characters");
```

In practice, HttpClient lifecycle should be managed via IHttpClientFactory to avoid Socket exhaustion issues.

DNS Security

DNSSEC (DNS Security Extensions) adds digital signatures to DNS responses, ensuring the authenticity and integrity of query results, preventing DNS Spoofing and Cache Poisoning.

Why is DNSSEC needed?

When the browser receives a DNS response saying "the IP of bank.com is 1.2.3.4," how can it confirm this was not forged by an attacker?

  1. Domain signs itself: bank.com uses a private key to generate an RRSIG signature for all DNS records and publishes the corresponding DNSKEY for verification.
  2. Self-signing cannot be trusted: Anyone can generate a set of keys and sign them; having RRSIG and DNSKEY is not enough. The resolver also needs to confirm "whether this DNSKEY itself is legitimate."
  3. Parent issues credentials (DS record): The .com TLD stores a DS record in its own zone, which is the hash of the bank.com DNSKEY. The resolver compares them; if they match, it means .com recognizes this key as genuine.
  4. Trace back up: Is the .com DNSKEY legitimate? The Root Zone stores the DS record for .com, then compare further up.
  5. Starting point of trust: The Root Zone's public key is pre-built into all operating systems and resolvers (Trust Anchor), the only link in the entire chain that does not require external verification.
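Step 3 can be sketched in Python. Note that a real DS record hashes the owner name in DNS wire format together with the DNSKEY RDATA; the byte string here is only a placeholder standing in for that encoding:

```python
import hashlib

# Conceptual sketch: the parent zone publishes a hash of the child's DNSKEY.
# Placeholder bytes stand in for the real wire-format owner name + DNSKEY RDATA.
child_dnskey = b"\x01\x01\x03\x08" + b"example-public-key-bytes"

# DS digest type 2 = SHA-256
ds_digest = hashlib.sha256(child_dnskey).hexdigest()

# The resolver recomputes the hash over the DNSKEY it actually received and compares:
received_dnskey = child_dnskey
chain_holds = hashlib.sha256(received_dnskey).hexdigest() == ds_digest
print(chain_holds)  # True → the parent vouches for this key
```

If an attacker substitutes their own DNSKEY, the recomputed hash no longer matches the DS record served (and signed) by the parent, and validation fails.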

Chain of Trust

DNSSEC Record Types

| Record Type | Full Name | Description |
| --- | --- | --- |
| RRSIG | Resource Record Signature | Digital signature of DNS records |
| DNSKEY | DNS Key | Stores the Zone's public key, used to verify RRSIG |
| DS | Delegation Signer | Parent zone's hash of the child zone's public key; establishes the chain of trust |
| NSEC / NSEC3 | Next Secure | Proves that a DNS record does not exist; NSEC3 adds a salted hash to prevent zone enumeration |
Windows / Linux DNS Query Command Examples
```powershell
# Windows / Linux common: query basic A record and MX record
nslookup example.com
nslookup -type=MX example.com

# Windows: query A record via a specified DNS server
Resolve-DnsName -Name example.com -Server 1.1.1.1 -Type A -DnsOnly

# Windows: view the DNSKEY published by the domain
Resolve-DnsName -Name example.com -Type DNSKEY -DnsOnly

# Windows: view DNSSEC signature records
Resolve-DnsName -Name example.com -Type RRSIG -DnsOnly
```

```bash
# Linux: basic query and concise output
dig example.com
dig example.com MX
dig example.com +short

# Linux: query A record from a specified DNS server
dig @1.1.1.1 example.com A

# Linux: view the DNSKEY published by the domain
dig example.com DNSKEY

# Linux: view the DS record in the parent zone
dig com DS
```
  • Resolve-DnsName can directly specify -Server, -Type, -DnsOnly, suitable for basic DNS and DNSSEC troubleshooting on Windows systems.
  • dig @server name type is the most common basic format, e.g., dig @8.8.8.8 example.com MX means "query a certain record from the specified DNS server."
  • nslookup is suitable for basic queries; when observing DNSSEC records or more complete response fields, Resolve-DnsName and dig will be clearer.

DNSSEC vs DNS Encryption Protocols Comparison:

| Protocol | Purpose | Description |
| --- | --- | --- |
| DNSSEC | Integrity + Authenticity | Verifies the DNS response has not been tampered with; query content remains plaintext |
| DoH (DNS over HTTPS) | Encrypt queries | Encapsulates DNS queries in HTTPS requests to prevent eavesdropping; Port 443 |
| DoT (DNS over TLS) | Encrypt queries | Encrypts DNS queries with TLS; Port 853 |

Difference between DNSSEC and DoH / DoT

DNSSEC does not encrypt query content: Queries and responses remain plaintext, only ensuring the response has not been tampered with. If eavesdropping prevention is needed, it must be paired with DoH or DoT.

VPN (Virtual Private Network) Types and Protocols Comparison Table

VPN Connection Types and Technical Principles

| Type | Description | Typical Protocol | Applicable Scenario |
| --- | --- | --- | --- |
| Site-to-Site VPN | Interconnects two fixed networks via an encrypted tunnel (e.g., head office ↔ branch office) | IPsec (Tunnel Mode) | Branch interconnection, cross-region data centers |
| Remote Access VPN | Individual users connect to the enterprise internal network from outside | SSL/TLS VPN, IPsec | Remote work; business travelers accessing internal resources |

Site-to-Site VPN vs Remote Access VPN Technical Principle Differences

| Technical Aspect | Site-to-Site VPN | Remote Access VPN |
| --- | --- | --- |
| Connection Architecture | Gateway-to-Gateway: VPN gateways at both ends establish the encrypted tunnel; internal hosts are unaware of it | Host-to-Gateway: the user's device establishes a connection directly with the enterprise VPN server |
| Tunnel Establishment | Persistent tunnel: once configured, the tunnel is always on, maintained regardless of data transmission | On-demand: the tunnel exists only while the user is connected; multiple users connect simultaneously but independently |
| Network Topology | Bridge mode: "bridges" two remote networks into one large logical network; devices at both ends communicate directly using internal IPs | Routing mode: the user obtains a virtual IP (usually from an IP pool assigned by the VPN server); the routing table decides which traffic goes through the VPN |
| IP Address Assignment | Static routing: subnets at both ends are fixed and non-overlapping (e.g., side A uses 192.168.1.0/24, side B uses 192.168.2.0/24), with routing rules pre-configured | Dynamic IP assignment: virtual IPs are assigned to connected users from a DHCP pool and can be reused |
| Authentication Mechanism | Device authentication: based on a Pre-Shared Key (PSK), digital certificates, or verification of both gateways; authenticates devices rather than individual users | User authentication: based on username/password, digital certificates, or multi-factor authentication (MFA); each connecting user is verified independently |
| Scalability | Low: multi-site deployments depend on the topology choice, see the table below | High: a VPN server can serve thousands of concurrent connections; scale by adding server capacity |
| Fault Impact Scope | Large single point of failure: a gateway failure at one end completely interrupts interconnection between the two sites, affecting all users there | Small single point of failure: one user's connection issue does not affect others; a failed VPN server can be covered by redundant servers |

Site-to-Site VPN Multi-Site Topology

| Aspect | Hub-and-Spoke | Full Mesh |
| --- | --- | --- |
| Architecture | All branch sites establish tunnels only with the central Hub | Every site establishes tunnels directly with all other sites |
| Connections | N−1 | N×(N−1)/2 |
| Traffic Path | Branch-to-branch traffic must detour through the Hub | Sites connect directly, no detour |
| Management Complexity | Low; centralized in the Hub configuration | High; connection count grows rapidly as sites increase |
| Latency | Higher (one extra hop) | Lower (direct connection) |
| Single Point of Failure | Hub failure interrupts communication between all branches; in practice the Hub needs HA (High Availability) backup | A failed site affects only itself; other sites can still connect directly |
| Applicable Scenario | Many branches, Hub bandwidth sufficient | Few sites, latency-sensitive or high traffic |

VPN Protocol Comparison

| Protocol | Operating Layer | Encryption Method | Characteristics |
| --- | --- | --- | --- |
| IPsec | L3 Network Layer | ESP (AES + HMAC) | Industry standard, suited to Site-to-Site; supports Transport / Tunnel modes, see the IPsec Modes Comparison Table. |
| SSL/TLS VPN | L4–L7 | TLS | Usable via browser or lightweight client, no dedicated software needed; suited to Remote Access. |
| WireGuard | L3 Network Layer | ChaCha20 + Poly1305 | Modern lightweight protocol with a very small code base (~4,000 lines); performance better than IPsec / OpenVPN. |

Split Tunneling vs Full Tunneling

| Mode | Behavior | Pros | Cons |
| --- | --- | --- | --- |
| Full Tunneling | All traffic goes through the VPN tunnel | High security; all traffic protected by enterprise security policy | High bandwidth consumption, affects performance |
| Split Tunneling | Only enterprise-resource traffic goes through the VPN; the rest uses the local network | Saves bandwidth, better user experience | Lower security; local traffic is unprotected |
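The split-tunneling routing decision can be sketched with Python's `ipaddress` module; the subnets and destination addresses are illustrative:

```python
import ipaddress

# Split tunneling: only destinations inside corporate subnets go through the VPN.
corporate_subnets = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
]

def route_via_vpn(destination: str) -> bool:
    """Return True if traffic to this destination should use the VPN tunnel."""
    addr = ipaddress.ip_address(destination)
    return any(addr in net for net in corporate_subnets)

print(route_via_vpn("10.1.2.3"))       # True  → enterprise resource, use the tunnel
print(route_via_vpn("93.184.216.34"))  # False → general internet, use local network
```

In a real client this decision is implemented as routing-table entries pushed by the VPN server, but the containment check is the same idea.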

VPN and Zero Trust

  • Traditional VPN's Full Tunneling ensures all traffic is inspected, but in a Zero Trust Architecture, every resource has independent access control (PEP), weakening the role of VPN.
  • Zero Trust does not necessarily cancel VPN, but VPN is no longer the only trust boundary.

VPN Protocol Details Supplement

IPsec IKE Two-Phase Negotiation

| Phase | Name | Purpose | Output |
| --- | --- | --- | --- |
| Phase 1 | IKE SA Establishment | Both parties negotiate encryption algorithms, verify identity, and establish a secure management channel | ISAKMP SA (Internet Security Association and Key Management Protocol SA) |
| Phase 2 | IPsec SA Establishment | Within the Phase 1 secure channel, negotiate encryption parameters for actual data transmission | IPsec SA (a pair of unidirectional SAs, each identified by an SPI) |
  • Phase 1 has two modes: Main Mode (6-step exchange, safer) and Aggressive Mode (3-step exchange, faster but identity protection is weaker).
  • Phase 2 uses Quick Mode, can negotiate multiple sets of IPsec SAs.
  • IKEv2 simplifies the negotiation process, merging Phase 1 + Phase 2 into 4 message exchanges (IKE_SA_INIT + IKE_AUTH).

WireGuard Technical Characteristics

  • Based on Noise Protocol Framework, uses a fixed combination of cryptography (ChaCha20, Poly1305, Curve25519, BLAKE2s), no need to negotiate encryption algorithms.
  • Adopts Fixed Public Key Pairing: Each Peer pre-configures the other's public key, simplifying the identity verification process.
  • Uses UDP only for transmission, no TCP mode.
  • Core code is about 4,000 lines (vs OpenVPN ~100,000 lines), easy for security auditing.
  • Connection establishment time is usually within 100ms (IPsec / OpenVPN usually takes seconds).

SSL/TLS VPN Two Modes

| Mode | Description | Applicable Scenario |
| --- | --- | --- |
| Clientless | Access web applications via browser; no software installation required | Temporary access, partners, BYOD devices |
| Full-tunnel Client | Install a dedicated client; all traffic goes through the TLS tunnel | Remote employees needing full network access |
  • Clientless mode only supports Web applications and some protocols (e.g., RDP over HTML5), functionality is limited but deployment is simplest.
  • Full-tunnel Client provides functionality equivalent to IPsec VPN, but has stronger ability to traverse firewalls via TLS (using TCP 443 Port).

IPsec Modes Comparison Table

| Aspect | Transport Mode | Tunnel Mode |
| --- | --- | --- |
| Encryption Scope | Encrypts the payload only (IP header not encrypted) | Encrypts the entire original IP packet (header included), then wraps it with a new IP header |
| IP Header Visibility | Original IP header retained; real source and destination IPs visible | Original IP header encrypted and hidden; the outer header shows the VPN gateway IP |
| Typical Use | End-to-end (host-to-host) communication | Gateway-to-gateway (Site-to-Site VPN) or Remote Access VPN |
| Security | Lower (an attacker can learn the IPs of both endpoints) | Higher (original IPs hidden; only VPN gateway IPs visible) |
| Packet Structure | [Original IP Header][IPsec Header][Encrypted Payload] | [New IP Header][IPsec Header][Encrypted (Original IP Header + Payload)] |

TIP

  • AH (Authentication Header): Provides integrity verification and source authentication only, does not provide encryption.
  • ESP (Encapsulating Security Payload): Provides encryption + integrity verification + source authentication.
  • In practice, most VPNs use ESP + Tunnel Mode.
  • Transport mode is common for secure communication between two hosts within the same local area network.

QUIC Protocol and HTTP/3

QUIC is a transport layer protocol developed by Google and standardized by IETF (RFC 9000). HTTP/3 uses QUIC as the underlying layer, replacing the TCP + TLS architecture of HTTP/2.

| Comparison Aspect | HTTP/2 (TCP + TLS) | HTTP/3 (QUIC) |
| --- | --- | --- |
| Transport Layer | TCP | UDP (QUIC implements reliable transmission on top of UDP) |
| Encryption | TLS (separate layer, independent handshake) | Built-in encryption (QUIC handshake merged with TLS 1.3) |
| Connection Establishment | TCP 3-way handshake + TLS handshake (1–2 RTT) | 0-RTT or 1-RTT (known servers can resume a connection in 0-RTT) |
| Head-of-Line Blocking (HOL Blocking) | Exists at the TCP layer (one lost packet blocks all streams) | None (streams are independent; a single lost packet does not affect other streams) |
| Connection Migration | An IP change requires rebuilding the connection | Uses a Connection ID; no disconnection when changing IP / network (e.g., phone switching from Wi-Fi to mobile data) |
| Firewall/NAT Traversal | TCP 443 widely allowed | UDP 443, may be blocked in some environments |

Security Considerations:

  • QUIC forces the use of TLS 1.3, cannot downgrade to weaker encryption versions, security is better than negotiable TLS 1.2.
  • Due to the use of UDP, traditional firewalls based on TCP connection state may not be able to deeply inspect QUIC traffic, posing visibility challenges for enterprises.
  • 0-RTT data may be subject to Replay Attacks; the server must ensure 0-RTT requests are handled idempotently.

TIP

  • RTT (Round-Trip Time): The time it takes for a packet to be sent and a response received. Higher RTT means greater latency. Establishing a connection requires several round trips, requiring several RTTs of waiting time.
  • TCP Three-way Handshake: TCP must complete three steps (SYN → SYN-ACK → ACK) before sending data, consuming 1 RTT. HTTPS adds TLS handshake on top, totaling 2 RTTs before data transmission can begin.
  • Head-of-Line Blocking (HOL Blocking): TCP requires packets to arrive in order; one packet loss makes all subsequent packets wait for retransmission, even if other data streams are completely normal, they will be blocked.
  • Jitter (Transmission Latency Variation): The inconsistency of packet arrival time. RTT is "average round-trip latency," Jitter is the "fluctuation amplitude of latency." Real-time communications like VoIP and video conferencing are extremely sensitive to Jitter; high Jitter leads to choppy audio or frozen screens. Attackers can deliberately create massive Jitter through network congestion (e.g., DDoS) to reduce service quality or even trigger timeout disconnections.
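The RTT/Jitter distinction can be illustrated with a few sample measurements; here jitter is approximated as the standard deviation of RTT samples (one common definition among several — RFC 3550, for instance, uses a running average of inter-arrival differences):

```python
import statistics

# RTT samples in milliseconds (illustrative values): RTT is the average round trip,
# jitter quantifies how much the latency fluctuates around it
rtt_samples = [20.1, 19.8, 35.6, 20.3, 48.9, 20.0]

average_rtt = statistics.mean(rtt_samples)
jitter = statistics.stdev(rtt_samples)  # spread of the samples, in ms

print(f"average RTT: {average_rtt:.1f} ms, jitter: {jitter:.1f} ms")
```

A VoIP call with a steady 80 ms RTT often sounds better than one averaging 30 ms with large spikes, which is why jitter matters independently of average latency.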

NAC and 802.1X Authentication

NAC (Network Access Control) is a mechanism that performs two types of checks before a device connects to the network:

  • Identity Authentication: Confirm whether the device or user has the right to enter the network, usually implemented via 802.1X.
  • Posture Assessment: Confirm whether endpoint security settings meet policy, including OS patches, antivirus software enabled status, whether prohibited software (e.g., P2P download tools) is installed, and whether personal firewalls are enabled.

Devices that fail either check are isolated to a restricted VLAN, allowed only to access patch servers for self-repair, or directly denied connection.

NAC Architecture:

802.1X is responsible for the "Identity Authentication" part mentioned above. It is an IEEE standard Port-based Network Access Control, using EAP (Extensible Authentication Protocol) as the identity verification framework.

802.1X Three Roles:

| Role | Description | Common Implementation |
| --- | --- | --- |
| Supplicant | Device or software requesting network access | Windows built-in 802.1X client, wpa_supplicant (Linux) |
| Authenticator | Network device controlling the port switch | Enterprise-grade switches, wireless APs |
| Authentication Server | Verifies identity and decides authorization | RADIUS server (e.g., FreeRADIUS, Microsoft NPS) |

802.1X Authentication Process:

EAP Common Methods Comparison:

EAP itself is just a framework; actual authentication strength depends on the chosen EAP method. The core difference lies in "which side requires a certificate" and "whether a TLS channel is established."

| EAP Method | Server Certificate | Client Certificate | Authentication Type | Characteristics |
| --- | --- | --- | --- | --- |
| EAP-MD5 | Not required | Not required | Unidirectional (server cannot be verified) | Weakest; cannot prevent man-in-the-middle attacks, no longer recommended |
| PEAP | Required | Not required | Unidirectional (verify server) | Establishes a TLS channel first; the client authenticates with a password (MSCHAPv2) inside the channel; most common in Windows enterprise environments |
| EAP-TTLS | Required | Not required | Unidirectional (verify server) | Similar to PEAP but supports more inner protocols (PAP, CHAP, MSCHAPv2, etc.); better cross-platform compatibility |
| EAP-TLS | Required | Required | Bidirectional (mutual authentication) | Both sides need PKI certificates; highest security but highest deployment cost (client certificates must be managed) |
| EAP-FAST | Optional | Not required | Unidirectional (default) | Proposed by Cisco; uses a PAC (Protected Access Credential) instead of certificates, avoiding certificate-management burden |

Private IP and CIDR Subnetting

Private IP Range (RFC 1918)

Private IPs are only routed within the organization; external communication requires NAT conversion to public IPs. Public IPs are assigned by IANA/ISP and are globally unique; private IPs are managed by the organization itself and can be reused across different organizations.

| Private Range | CIDR | Available Hosts | Common Scenario |
| --- | --- | --- | --- |
| 10.0.0.0 – 10.255.255.255 | 10.0.0.0/8 | 16,777,214 | Large enterprise internal network |
| 172.16.0.0 – 172.31.255.255 | 172.16.0.0/12 | 1,048,574 | Medium enterprise |
| 192.168.0.0 – 192.168.255.255 | 192.168.0.0/16 | 65,534 | Home/small office |

CIDR Subnetting

In the CIDR prefix (/n), the first n bits are the network part, and the remaining (32 - n) bits are the host part.

Available hosts = 2^(32-n) - 2 (subtracting network address and broadcast address)

| CIDR | Host Bits | Available Hosts | Applicable Scenario |
| --- | --- | --- | --- |
| /22 | 10 | 1,022 | Large department (~1,000 hosts) |
| /24 | 8 | 254 | General office segment |
| /26 | 6 | 62 | Small department |
| /28 | 4 | 14 | Small isolated segment |
| /30 | 2 | 2 | Point-to-point link |
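The usable-host formula can be checked with Python's `ipaddress` module (example prefixes only):

```python
import ipaddress

# Usable hosts = 2^(32 - n) - 2: subtract the network and broadcast addresses
for cidr in ("10.0.0.0/22", "192.0.2.0/24", "192.0.2.0/26", "192.0.2.0/28", "192.0.2.0/30"):
    net = ipaddress.ip_network(cidr)
    usable = net.num_addresses - 2
    print(f"{cidr}: {usable} usable hosts")
```

The same module is handy when planning segmentation: `net.subnets(prefixlen_diff=2)` splits a block into four smaller ones without manual binary arithmetic.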

Security Significance of Subnetting

Shrinking the subnet shrinks the Blast Radius: if one segment is breached, the impact scope is limited to hosts within that subnet.

Practical example: Finance department 10 people → /28 (14 available IPs), even other departments in the same office building cannot connect directly.

Network Segmentation

Network segmentation cuts large networks into multiple smaller security zones, limiting lateral movement; even if an attacker invades one segment, they cannot directly access resources in other segments.

| Implementation | English | Layer | Description |
|---|---|---|---|
| VLAN | Virtual LAN | L2 | Establishes logically isolated broadcast domains on the same physical switch; communication between different VLANs must pass through an L3 router or firewall. |
| ACL | Access Control List | L3–L4 | Sets rules on routers or L3 switches to filter allowed or denied traffic by IP address, port number, and protocol. |
| DMZ | Demilitarized Zone | L3–L7 | Isolated buffer zone between the external network (Internet) and the internal network, usually hosting externally facing services (web, mail, DNS servers). External hosts can reach the DMZ but cannot enter the internal network directly; internal hosts can reach the DMZ, and a breached DMZ host cannot move laterally into the internal network directly. |
| Firewall Zone | Firewall Zone | L3–L7 | Divides the network into zones with different trust levels (e.g., Trust / Untrust / DMZ); cross-zone traffic is checked against firewall policy. |
| Microsegmentation | Microsegmentation | L2–L7 | In virtualized or cloud environments, applies security policy per workload (VM / container), realizing a core principle of Zero Trust architecture. |

VLAN Security

VLAN (Virtual Local Area Network) cuts out multiple logically independent network segments on the same physical switch, allowing traffic from different departments (e.g., Finance VLAN 10, Engineering VLAN 20) not to interfere with each other, even if they share the same physical line. The implementation standard is 802.1Q, which adds a 4-byte VLAN Tag to the Ethernet frame, allowing the switch to determine which network segment each packet belongs to.

Port Types

Ethernet frames sent by terminal devices (computers) do not carry VLAN tags themselves, but the switch must know which VLAN this frame belongs to when forwarding. The Port type determines how the switch handles this tag at each connection point.

| Type | Connection Object | Description |
|---|---|---|
| Access Port | Terminal device (computer, printer, IP phone) | Belongs to a single VLAN. Terminal devices send untagged Ethernet frames; the switch adds the 802.1Q tag of the port's VLAN on ingress and strips it on egress. Terminal devices are unaware that VLANs exist. |
| Trunk Port | Between switches; switch to router | Carries traffic for multiple VLANs simultaneously. Frames travel in tagged 802.1Q format; the tag stays unchanged along the trunk link, and the receiving device must interpret the VLAN tag and forward the frame to the corresponding VLAN. |
| Native VLAN | — (an attribute of a Trunk Port) | The VLAN to which untagged frames on a trunk port belong; defaults to VLAN 1, mainly for compatibility with legacy devices that do not support 802.1Q. Attackers can abuse this mechanism for Double Tagging attacks (see next section), so the Native VLAN should be changed to an unused number other than VLAN 1. |

Scenario Comparison

Access Port — Department Office Door

Employees (terminal devices) do not need to wear any identification stickers inside the office because everyone in the office is from the same department. When an employee walks out of the department door, the guard at the door (Access Port) will stick a department identification sticker (VLAN Tag) on their back; it is torn off when entering the door. Terminal devices do not need to know about the existence of stickers at all; sticking and tearing are handled entirely by the switch.

Trunk Port — Building Shared Elevator

People from the Finance department and Engineering department share the same elevator (Trunk Port). To let the guard on another floor identify which department everyone belongs to, everyone must stick on a department sticker (Tagged) before entering the elevator, and the elevator keeps the sticker on without tearing it off throughout the journey. Upon arrival, the guard at the receiving end (another switch) sees the sticker and knows which department to guide the person to.

Native VLAN — People Without Stickers

If someone walks into the elevator without any stickers (Untagged frame arrives at Trunk Port), the building has a default rule: everyone is classified as "default identity" (Native VLAN, defaults to VLAN 1). Attackers can use this rule to add two layers of tags to the frame; the outer layer matches the Native VLAN, the switch strips the outer layer, and the inner tag sends the traffic into an unauthorized VLAN (i.e., Double Tagging attack).

Frame Transmission Process

VLAN Hopping Attack

Attackers exploit VLAN configuration flaws to cross VLAN boundaries and access unauthorized network segments.

Double Tagging

Vulnerability Principle: Exploits the default behavior of switches to "automatically strip tags" when processing Native VLANs to smuggle malicious packets into unauthorized network segments. This is a one-way blind attack.

Trigger Conditions:

  1. Environment Topology: There must be 2 or more switches in the attack path, connected via a Trunk Port.
  2. Attacker Location: The Port where the attacker is located must belong to a VLAN that matches the Native VLAN (usually defaults to VLAN 1) of the Trunk.

Malicious Payload Structure:

The attacker self-forges a special frame with two 802.1Q tags: [Outer Tag: Native VLAN (e.g., VLAN 1)] + [Inner Tag: Target Segment (e.g., VLAN 10)] + [Malicious Data].

Attack Execution Pipeline:

  1. Switch 1 (Vulnerability Trigger Point): Receives the attacker's packet, prepares to send it into the Trunk channel. Upon discovering "Outer Tag = Native VLAN," it triggers the system default rule: strip the outer tag.
  2. Trunk (Smuggling Channel): The outer tag is torn off, the hidden [Inner Tag: VLAN 10] is exposed and transmitted in this state within the Trunk.
  3. Switch 2 (Victim Forwarding Node): Receives the packet from the Trunk. Switch 2 only sees [Inner Tag: VLAN 10] and, following normal logic, forwards it to the target host in VLAN 10 without any defense.

This attack is naturally one-way: the target host's response frame follows the normal VLAN 10 path and cannot return to the attacker, so it is mostly used to trigger target behavior (e.g., ARP poisoning, service probing), rather than two-way data theft.
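The two-tag payload described above can be modeled byte-for-byte with nothing but the standard library; the MAC addresses are illustrative, and the point is how stripping the outer tag leaves a validly tagged VLAN 10 frame:

```python
import struct

def dot1q_tag(vlan_id: int, tpid: int = 0x8100) -> bytes:
    """4-byte 802.1Q tag: 16-bit TPID followed by 16-bit TCI (PCP/DEI zeroed)."""
    return struct.pack("!HH", tpid, vlan_id & 0x0FFF)

dst = bytes.fromhex("ffffffffffff")   # destination MAC (broadcast, illustrative)
src = bytes.fromhex("aabbccddeeff")   # attacker MAC (illustrative)
payload = b"..."                      # inner EtherType + attack payload would follow

# Double-tagged frame: outer tag = Native VLAN 1, inner tag = target VLAN 10
frame = dst + src + dot1q_tag(1) + dot1q_tag(10) + payload

# Switch 1 strips the outer 4-byte tag (Native VLAN behavior),
# leaving a frame that now carries only the inner VLAN 10 tag
after_switch1 = frame[:12] + frame[16:]
assert after_switch1[12:14] == b"\x81\x00"                         # still 802.1Q tagged
assert int.from_bytes(after_switch1[14:16], "big") & 0x0FFF == 10  # destined for VLAN 10
```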

Switch Spoofing (Disguised Trunk)

Most switches have DTP (Dynamic Trunking Protocol) enabled by default, which automatically negotiates whether to establish a Trunk connection with the other side. The attacker's host sends DTP negotiation messages, inducing the switch to promote the Port to a Trunk Port. Once negotiation succeeds, the attacker's host begins receiving tagged frames for all VLANs, and VLAN isolation completely fails.

| Defense Measure | Defends Against | Description |
|---|---|---|
| Modify Native VLAN | Double Tagging | Change the Native VLAN to an unused number other than VLAN 1, so the attacker's outer tag can no longer match it |
| Disable DTP | Switch Spoofing | Explicitly set ports to Access mode and disable auto-negotiation, preventing unauthorized devices from negotiating a trunk |
| Disable Unused Ports | Both | Shut down idle ports and assign them to an isolated VLAN, reducing the attack surface |
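On Cisco IOS switches, the three measures above map to configuration roughly like this (interface names and VLAN numbers are illustrative):

```text
! Access port: pin to a single VLAN and disable DTP negotiation
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 20
 switchport nonegotiate

! Trunk port: explicit trunk, Native VLAN moved off VLAN 1
interface GigabitEthernet0/24
 switchport mode trunk
 switchport trunk native vlan 999
 switchport nonegotiate

! Unused ports: shut down and park in an isolated VLAN
interface range GigabitEthernet0/10 - 20
 switchport access vlan 666
 shutdown
```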

SDN Security

In traditional networks, each switch and router contains its own Control Plane (decision logic) and Data Plane (packet forwarding). If security rules need updating, they must be configured device by device, and any missed device forms a gap. SDN decouples the control plane from the device and centralizes it into the SDN Controller, with devices only responsible for executing forwarding instructions.

Three-Layer Architecture

| Component | Description |
|---|---|
| Application Layer | Network applications (e.g., load balancing, firewall policy) pass strategies to the controller via the Northbound API (usually a REST API). |
| Control Layer | The SDN Controller (e.g., OpenDaylight, ONOS) receives instructions from the application layer, computes global forwarding rules, and pushes them to devices. |
| Infrastructure Layer | Physical switches/routers receive instructions from the controller via the Southbound API (e.g., OpenFlow) and make no routing decisions themselves. |

Scenario Comparison

Traditional networks are like every floor in a building having its own guard room, each guard having their own manual of entry rules, making their own judgments on whether to let people pass. Updating rules means running to every guard room, and missing one creates a vulnerability.

SDN's approach is to set up a central control room (SDN Controller) and abolish the rule manuals in each guard room. Guards no longer make their own judgments; they report to the central control room upon encountering anyone, and the central control room makes a unified decision on whether to let them pass or intercept them, then transmits the instructions back to the guard for execution.

  • Northbound API: Management personnel (application layer) issue policies to the central control room via an internal communication system (REST API), e.g., "block this IP."
  • Southbound API: The central control room issues specific instructions to guards on each floor via a dedicated channel (OpenFlow).

Security Advantages:

  • Centralized Security Policy: Define security rules uniformly on the controller and deploy them to all devices at once, avoiding inconsistencies caused by device-by-device configuration.
  • Dynamic Response: When abnormal traffic is detected, the controller can modify global forwarding rules in real-time to isolate infected hosts without operating device by device.
  • Network Visibility: The controller masters the global traffic state, facilitating anomaly detection and forensic analysis.

Security Risks:

  • Controller is a Single Point of Failure: Once the controller is breached, the forwarding rules of the entire network are controlled. Must deploy HA (High Availability) clusters and strictly restrict controller access permissions.
  • Northbound API Attack: If an attacker gains application layer access, they can issue malicious instructions to the controller via REST API (e.g., open all traffic, bypass firewall rules). Must implement API authentication and authorization.
  • Southbound API Attack: If an attacker can insert forged OpenFlow messages, they can directly manipulate switch forwarding rules. Communication between the controller and devices must be forced to use TLS encryption.
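As a concrete feel for the Northbound API, here is a minimal sketch of an application pushing a "block this IP" policy to a controller. The URL, token placeholder, and JSON schema are all hypothetical: real controllers such as OpenDaylight and ONOS each define their own flow-rule API.

```python
import json
import urllib.request

# Hypothetical controller endpoint -- illustrative only
CONTROLLER = "https://sdn-controller.example.internal:8443"

# A drop rule for one source IP, expressed in a made-up JSON schema
policy = {
    "name": "block-malicious-ip",
    "match": {"ipv4_src": "203.0.113.50"},
    "action": "drop",
    "priority": 40000,
}

req = urllib.request.Request(
    f"{CONTROLLER}/api/flows",
    data=json.dumps(policy).encode(),
    headers={
        "Content-Type": "application/json",
        # Northbound APIs must enforce authentication and authorization
        "Authorization": "Bearer <token>",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # not executed here: requires a live controller
```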

IPv6 Security Considerations

IPv4 uses 32-bit addresses (approx. 4.3 billion), which have long been exhausted. IPv6 expands the address to 128 bits (approx. 3.4×10³⁸), which is the long-term replacement solution. Due to the massive IPv4 infrastructure, most networks are currently in a Dual Stack transition period, running IPv4 and IPv6 simultaneously. This parallel state brings new security considerations: if security policies are designed only for IPv4, IPv6 traffic becomes a blind spot.

IPsec is a mandatory support (but not mandatory to enable) in the IPv6 specification, providing native end-to-end encryption capability, an inherent security advantage that IPv4 does not have.

| Security Risk | Description |
|---|---|
| Dual Stack Risk | When IPv4 and IPv6 are enabled simultaneously, if IPv6 lacks corresponding security policies (firewall rules, IDS signatures), attackers can bypass defenses written only for IPv4. |
| RA Forgery | IPv6 uses SLAAC for automatic address assignment, a process that depends on RA messages sent by routers. Attackers can send forged RAs and set themselves as the default gateway, an effect similar to IPv4 ARP Spoofing. |
| IPv6 Tunnel Abuse | Transition mechanisms like Teredo and 6to4 encapsulate IPv6 packets in IPv4 for transmission, potentially bypassing firewalls and IDS that do not inspect IPv6. |
| Address Reconnaissance Difficulty | A /64 subnet in IPv6 contains 2⁶⁴ addresses, so traditional IP-by-IP scanning is infeasible. Attackers switch to DNS queries, multicast addresses, or EUI-64 (the rule for deriving IPv6 addresses from device MAC addresses) for reconnaissance. |

| Defense Measure | Description |
|---|---|
| Block Unused IPv6 Traffic | If the organization has not deployed IPv6, explicitly block IPv6 traffic on the firewall, including the ports used by Teredo and 6to4 tunnels, to avoid a security blind spot. |
| Dual Stack Policy Synchronization | If dual stack is deployed, firewall rules, IDS/IPS signatures, and log records must cover IPv4 and IPv6 simultaneously, not just one of them. |
| Enable RA Guard | Enable RA Guard on switches so that only legitimate router ports may send RA messages, blocking forged Router Advertisements. |
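If the organization truly has no IPv6 deployment, the "block unused IPv6 traffic" measure can be sketched as follows (host-level rules; adjust to the firewall actually in use):

```bash
# Drop all IPv6 traffic on a host that has not deployed IPv6
ip6tables -P INPUT DROP
ip6tables -P FORWARD DROP
ip6tables -P OUTPUT DROP

# On the IPv4 firewall, also block common tunnel transports:
# 6to4 rides directly on IPv4 as protocol 41, Teredo uses UDP port 3544
iptables -A FORWARD -p 41 -j DROP
iptables -A FORWARD -p udp --dport 3544 -j DROP
```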

💡 Terminology Quick Check

  • SLAAC: Stateless Address Autoconfiguration — IPv6 devices do not need a DHCP server, automatically generating their own IP addresses via RA messages.
  • RA: Router Advertisement — Messages broadcast periodically by routers to inform devices in the subnet of the gateway address and network prefix.
  • EUI-64: Extended Unique Identifier (64-bit) — Converts a device's 48-bit MAC address into a 64-bit interface identifier, used for the host part of automatically generated IPv6 addresses.
  • Teredo / 6to4: IPv6 over IPv4 Tunneling Transition Mechanism — Encapsulates IPv6 packets in IPv4 for transmission, allowing pure IPv4 environments to connect to IPv6 networks.
  • RA Guard: Router Advertisement Guard — Switch function, allows only specified legitimate router Ports to send RA messages, preventing forgery.
  • Dual Stack: Simultaneously enables IPv4 and IPv6 on the same device, currently the most common transition deployment method.
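The EUI-64 rule above is mechanical enough to sketch in a few lines (the sample MAC address is chosen for illustration):

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the 64-bit IPv6 interface identifier from a 48-bit MAC:
    insert ff:fe in the middle and flip the Universal/Local bit."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                      # flip the U/L bit of the first octet
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = [f"{eui[i] << 8 | eui[i + 1]:04x}" for i in range(0, 8, 2)]
    return ":".join(groups)

print(eui64_interface_id("00:25:96:12:34:56"))  # 0225:96ff:fe12:3456
```

Because the interface ID embeds the MAC address, EUI-64 lets attackers shrink the otherwise unsearchable 2⁶⁴ address space to known vendor OUI ranges, which is why it aids reconnaissance.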

Wireless Network Encryption Comparison Table

The WPA (Wi-Fi Protected Access) series is a Wi-Fi security certification standard released by the Wi-Fi Alliance, starting from the structural vulnerabilities of WEP (Wired Equivalent Privacy), and has been strengthened generation by generation through WPA → WPA2 → WPA3 to enhance encryption and authentication mechanisms.

| Standard | Encryption Algorithm | Authentication Method | Key Length | Known Vulnerabilities | Current Recommendation |
|---|---|---|---|---|---|
| WEP | RC4 | Open / Shared Key | 40/104-bit | IV reuse attack; key crackable in minutes | Disable |
| WPA | TKIP (RC4-based) | PSK / 802.1X | 128-bit | Michael MIC attack; designed only as a WEP transition scheme | Not recommended |
| WPA2 | AES-CCMP | PSK / 802.1X | 128-bit | KRACK (Key Reinstallation Attack) can force Nonce reuse | Still usable, but should upgrade to WPA3 |
| WPA3 | AES-GCMP / AES-CCMP | SAE / 802.1X | 128/192-bit | Dragonblood (patched) | Recommended |

Known Vulnerability Description

  • IV Repetition Attack (WEP): WEP uses a 24-bit Initialization Vector (IV) combined with RC4 for stream encryption. The IV space is only 2²⁴ ≈ 16 million, which is exhausted quickly in busy networks and starts repeating. When two packets use the same IV, ciphertext XOR can cancel out the key stream, thereby restoring plaintext or reversing the key.
  • Michael Attack (WPA/TKIP): TKIP (Temporal Key Integrity Protocol) is the encryption protocol for WPA, designed to strengthen WEP vulnerabilities without replacing old hardware. The underlying layer still uses RC4, but adds per-packet keys (to avoid IV repetition) and message integrity code Michael (to prevent forgery). Michael's design strength is insufficient; attackers can forge packets and pass verification in a short time; TKIP was an emergency transition scheme, and its overall design has inherent limitations, eventually replaced by WPA2's AES-CCMP.
  • KRACK (WPA2): Key Reinstallation Attack. Attackers replay message 3 of the four-way handshake, forcing the client to reinstall the key and reset the Nonce to 0. After the Nonce repeats, the AES-CCMP encryption protection fails, and attackers can decrypt packets or forge data. See the four-way handshake explanation below.
  • Dragonblood (WPA3/SAE): Side-channel attack against early implementations of SAE, inferring passwords by measuring timing differences or cache access behavior during the Dragonfly calculation process. The Wi-Fi Alliance has patched this in WPA3 Revision 1; current correct implementations are not affected.
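The IV-reuse problem in WEP reduces to a property of XOR stream ciphers: encrypting two messages with the same keystream lets an eavesdropper cancel the keystream entirely. A toy demonstration (random bytes stand in for the RC4 keystream; this is not real WEP):

```python
import os

keystream = os.urandom(16)                 # stands in for RC4(IV || key)
p1 = b"ATTACK AT DAWN!!"
p2 = b"RETREAT AT DUSK!"

c1 = bytes(a ^ b for a, b in zip(p1, keystream))
c2 = bytes(a ^ b for a, b in zip(p2, keystream))

# Same IV => same keystream => XOR of ciphertexts leaks XOR of plaintexts
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(p1, p2))

# Knowing (or guessing) p1 then recovers p2 outright
recovered = bytes(a ^ b for a, b in zip(leak, p1))
assert recovered == p2
```

With only 2²⁴ possible IVs, busy WEP networks repeat keystreams quickly, which is exactly the condition this sketch exploits.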

WPA2 Four-Way Handshake

The four-way handshake confirms during connection that both parties hold the same PMK (Pairwise Master Key, derived from the passphrase), and negotiates the PTK (Pairwise Transient Key) used to encrypt unicast data and the GTK (Group Temporal Key) used for broadcast. The PMK itself is not transmitted over the network; both parties calculate it from the passphrase.
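The PMK derivation itself is plain PBKDF2: 4096 iterations of HMAC-SHA1 with the SSID as salt, yielding 256 bits. A sketch using the published IEEE test vector (passphrase "password", SSID "IEEE"):

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    # PMK = PBKDF2-HMAC-SHA1(passphrase, salt=SSID, 4096 iterations, 256-bit output)
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

pmk = wpa2_pmk("password", "IEEE")
print(pmk.hex())
# f42c6fc52df0ebef9ebb4b90b38a5f902e83fe1b135a70e23aed762e9710a12e
```

Because the SSID and iteration count are public, one captured handshake lets an attacker test candidate passphrases offline at will; this is precisely the weakness WPA3's SAE removes.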

KRACK Attack Process

The attacker positions themselves as a man-in-the-middle and withholds message 4 (the ACK), so the AP believes the client never received message 3 and, following the protocol, retransmits it. Each time the client receives message 3, it reinstalls the key and resets the Nonce to 0. Once the Nonce repeats, the same key encrypts different packets with the same keystream; the attacker can recover the keystream and thereby decrypt or forge packets.

WPA3 and SAE

WPA2 vs WPA3 Core Differences:

| Aspect | WPA2 | WPA3 |
|---|---|---|
| Key Exchange | 4-Way Handshake (PSK) | SAE (Dragonfly protocol) |
| Offline Dictionary Attack | Feasible (capture the handshake, then brute-force offline) | Infeasible (each guess requires a live handshake; no offline brute force) |
| Forward Secrecy (PFS) | Not supported | Supported (SAE generates independent session keys each time) |
| Open Networks | No encryption | OWE (Opportunistic Wireless Encryption) |
| Enterprise Security | WPA2-Enterprise | WPA3-Enterprise (192-bit CNSA suite) |

SAE (Simultaneous Authentication of Equals)

SAE is based on the Dragonfly key exchange protocol (RFC 7664). The core idea of Dragonfly is to map the password to a point on an elliptic curve (PWE, Password Element), and perform key exchange based on this; the password itself is never transmitted over the network, nor does it leave any material that can be used for offline comparison. Attackers must interact with the other party for a complete handshake for every guess, unlike WPA2 where they can capture packets once and offline brute-force the password infinitely.

SAE handshake is divided into two stages:

  • Commit Stage: Both parties map the password and MAC address to a password element (PWE) on the elliptic curve, generate a set of random private values, calculate scalar and element, and exchange them. Even if an attacker captures these public values, they cannot verify password guesses without interacting with the other party.
  • Confirm Stage: Both parties calculate a confirmation code based on the negotiated shared key and verify it with each other, ensuring both sides know the correct password, completing bidirectional identity verification.

OWE (Opportunistic Wireless Encryption)

OWE is used for open networks without passwords (e.g., coffee shop Wi-Fi). Traditional open networks have no encryption at all, and anyone on the same network can eavesdrop on traffic. OWE allows each client and AP to perform an ECDH key exchange during connection, establishing an independent encrypted channel for that connection, preventing passive eavesdropping.

OWE does not verify the AP's identity (open networks have no password to bind the AP), so it cannot prevent Evil Twin Attacks, only providing eavesdropping protection, not identity verification.

WPA3 Version Description:

| Version | Applicable Scenario | Core Characteristics |
|---|---|---|
| WPA3-Personal | Home / small office | SAE replaces PSK; provides PFS |
| WPA3-Enterprise | Government / high-security environments | Supports the 192-bit security suite (CNSA, Commercial National Security Algorithm suite) |
| OWE | Public Wi-Fi (open networks) | Establishes an independent encrypted channel per user; no password required |

Purdue Model (OT / ICS Layered Security Architecture)

The Purdue Model (Purdue Enterprise Reference Architecture, PERA) is an industrial control system (ICS / OT) security layered reference architecture, dividing the OT environment into six levels (Level 0–5) based on function, explicitly specifying requirements for equipment, functions, and security isolation at each level.

| Level | Name | Equipment / Function | Description |
|---|---|---|---|
| Level 0 | Field Devices | Sensors, actuators, motors | Bottom-level hardware directly controlling physical processes |
| Level 1 | Control | PLC (Programmable Logic Controller), RTU (Remote Terminal Unit), DCS | Receives sensor signals and drives actuators according to control logic |
| Level 2 | Supervisory | SCADA (Supervisory Control and Data Acquisition), HMI (Human Machine Interface) | Operator interface; monitors and controls Level 1 equipment. Security incidents often spread from this level |
| Level 3 | Manufacturing Operations | MES (Manufacturing Execution System), Historian data-collection server | Scheduling, production tracking, recording manufacturing data |
| Level 3.5 (iDMZ) | Industrial DMZ | Proxy, jump server, data diode, firewall | Buffer isolation zone between OT (Level 3) and IT (Level 4). Ensures there is no direct network connection between IT and OT; all data exchange must pass through proxy or jump servers in the iDMZ, preventing IT-layer malware from reaching the OT core directly |
| Level 4–5 | Enterprise and External Network | ERP, business systems, Internet | Traditional IT environment |

OT Security Key Principles

  • The boundary between Level 1 (PLC/RTU) and Level 2 (SCADA/HMI) is the lateral movement path most often exploited by attackers: after invading SCADA (Level 2), they can issue commands downward, directly affecting physical processes at Level 1 (e.g., Ukraine power grid attack incident).
  • iDMZ (Industrial DMZ): Corresponds to Level 3.5, is a necessary buffer isolation zone between OT and IT; data should be transmitted unidirectionally through proxy servers or data diodes in the iDMZ to prevent external threats from entering OT in reverse.
  • Purdue vs Zero Trust: Zero Trust requires verifying identity for every request, but many PLC devices in OT environments do not have authentication capabilities, so Zero Trust in OT environments is mainly implemented in network boundary control between Level 3–4.

💡 Terminology Quick Check

  • IT: Information Technology — Computer systems that process data, communication, and business logic, e.g., servers, databases, ERP. The core difference from OT is that IT prioritizes Confidentiality, while OT prioritizes Availability.
  • OT: Operational Technology — Hardware and software systems that directly control physical equipment and industrial processes.
  • ICS: Industrial Control System — The upper-level collective term for OT, covering control systems like PLC, SCADA.
  • PLC: Programmable Logic Controller — Industrial computer that receives sensor signals and controls actuators based on set logic.
  • RTU: Remote Terminal Unit — Data acquisition and control equipment deployed in the field, communicating with SCADA.
  • DCS: Distributed Control System — Industrial control architecture that distributes control functions to multiple nodes, commonly used in large manufacturing plants.
  • SCADA: Supervisory Control and Data Acquisition — System that centrally monitors multiple field devices, operators monitor and control via HMI interface.
  • HMI: Human Machine Interface — Graphical interface for operators to interact with industrial control systems.
  • MES: Manufacturing Execution System — Information system that manages production scheduling and tracks manufacturing progress.
  • ERP: Enterprise Resource Planning — Management system that integrates enterprise resources such as finance, HR, supply chain, etc., belongs to the IT side.
  • Data Diode: Hardware device that only allows data to flow unidirectionally, commonly used at the OT / IT boundary to prevent external threats from entering OT in reverse.

Linux / Windows Network Security Tools Comparison Table

Firewall Configuration

| Function | Windows | Linux |
|---|---|---|
| Built-in Firewall | Windows Defender Firewall | iptables / nftables / firewalld |
| CLI Management | netsh advfirewall / PowerShell New-NetFirewallRule | iptables / nft / firewall-cmd |
| GUI Management | wf.msc (Windows Firewall with Advanced Security) | Cockpit Web Console / UFW (Uncomplicated Firewall) |
| Rule Persistence | Automatically saved | iptables requires iptables-save / iptables-restore; firewalld persists via XML configuration files |

Linux Firewall Evolution

  • iptables: Classic packet filtering tool for Linux 2.4+, based on the Netfilter framework. Rules are organized by Chain and Table.
  • nftables: Introduced in Linux 3.13+, successor to iptables. Syntax is more unified, performance is better, and it has gradually replaced iptables.
  • firewalld: Default dynamic firewall management tool for RHEL / CentOS / Fedora, can use iptables or nftables at the bottom. Supports Zone concept and real-time rule changes.
iptables / nftables Firewall Rule Examples

iptables Common Rules

bash
# Allow loopback traffic first (local services depend on it once the policy is DROP)
iptables -A INPUT -i lo -j ACCEPT

# Allow established connections and related traffic
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Allow SSH (Port 22)
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Allow HTTPS (Port 443)
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# Default drop all inbound traffic
iptables -P INPUT DROP

# Save rules (Debian/Ubuntu)
iptables-save > /etc/iptables/rules.v4

nftables Equivalent Rules

bash
# Create table and chain
nft add table inet filter
nft add chain inet filter input { type filter hook input priority 0 \; policy drop \; }

# Allow loopback and established connections
nft add rule inet filter input iifname "lo" accept
nft add rule inet filter input ct state established,related accept

# Allow SSH and HTTPS
nft add rule inet filter input tcp dport { 22, 443 } accept

# Save rules
nft list ruleset > /etc/nftables.conf

Network Diagnostic Tools

| Function | Windows | Linux | Description |
|---|---|---|---|
| IP Config Query | ipconfig / ipconfig /all | ip addr / ifconfig (deprecated) | View the IP, subnet mask, and gateway of network interfaces. |
| Connection Status | netstat -an | ss -tulnp / netstat -tulnp | View current TCP/UDP connections and listening ports. ss is the modern replacement for netstat. |
| Route Trace | tracert | traceroute / mtr | Trace the hop path of packets to the destination. mtr combines ping and traceroute. |
| DNS Query | nslookup / Resolve-DnsName | dig / nslookup | Use nslookup for general queries; for DNSSEC records, see the centralized examples in the DNS Security section. |
| ARP Table Query | arp -a | ip neigh / arp -a | View the local ARP cache. |

Common Network Diagnostic Command Examples
bash
# Trace route (find where packets are dropped or latency spikes)
tracert 8.8.8.8                     # Windows
traceroute 8.8.8.8                  # Linux

# View listening Ports (confirm which services are exposed externally)
netstat -ano | findstr LISTENING    # Windows
ss -tulnp                           # Linux (-t TCP, -u UDP, -l listening, -n no name resolution, -p show process)

0.0.0.0 / 127.0.0.1 / localhost Easily Confused Security Traps

| Address | Semantics | Security Risk |
|---|---|---|
| 127.0.0.1 | Loopback address; packets never leave the host | Service is limited to local access and cannot be reached from outside. |
| localhost | Hostname, resolving to 127.0.0.1 by default (::1 in IPv6 environments) | /etc/hosts can be tampered with to point elsewhere; in dual-stack environments, firewall rules that cover only 127.0.0.1 may miss ::1. |
| 0.0.0.0 | In a server bind context: listen on all network interfaces, including external ones | Misusing 0.0.0.0 during development exposes services intended only for local access directly to the external network. |

Common traps:

  • Database (Redis, MySQL) in development environment bound to 0.0.0.0 and then put online, lacking external firewall protection, directly exposed to the public network.
  • 127.0.0.1 inside a container is limited to the container itself, the host cannot access it; if host access is needed, explicitly bind to 0.0.0.0 or the host IP, and pair with firewall to restrict source IP.
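The bind-address difference is visible directly in the socket API; this sketch only listens on OS-assigned ephemeral ports and makes no connections, so it is safe to run:

```python
import socket

# Bound to loopback: reachable only from this host
lo = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lo.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
lo.listen()
lo_addr = lo.getsockname()         # ('127.0.0.1', <port>) -- loopback only

# Bound to all interfaces: reachable from every network the host sits on
any_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
any_sock.bind(("0.0.0.0", 0))
any_sock.listen()
any_addr = any_sock.getsockname()  # ('0.0.0.0', <port>) -- exposed externally

print(lo_addr, any_addr)
lo.close()
any_sock.close()
```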

Network Sniffing and Packet Capture

| Tool | Platform | Description |
|---|---|---|
| Wireshark | Windows / Linux / macOS | GUI packet analysis tool; parses hundreds of protocols. |
| tcpdump | Linux / macOS | CLI packet capture tool; lightweight and efficient, suited to server environments. |
| tshark | Windows / Linux / macOS | CLI version of Wireshark, suited to scripted automation. |

tcpdump Common Commands
bash
# Capture all traffic on eth0 interface
tcpdump -i eth0

# Capture only HTTPS traffic (Port 443)
tcpdump -i eth0 port 443

# Capture traffic for a specific host
tcpdump -i eth0 host 192.168.1.100

# Capture TCP SYN packets (first step of three-way handshake)
tcpdump -i eth0 'tcp[tcpflags] & tcp-syn != 0'

# Save capture results to pcap file (for Wireshark analysis)
tcpdump -i eth0 -w capture.pcap -c 1000

# Read pcap file
tcpdump -r capture.pcap

# Exclude SSH traffic (avoid capturing own remote connection)
tcpdump -i eth0 not port 22
Wireshark Display Filter Common Syntax
text
# Filter by protocol
http
dns
tls
tcp
arp

# Filter by IP address
ip.addr == 192.168.1.100
ip.src == 10.0.0.1
ip.dst == 172.16.0.0/12

# Filter by Port
tcp.port == 443
tcp.dstport == 80

# Filter by HTTP method
http.request.method == "POST"

# Filter by DNS query name
dns.qry.name == "example.com"

# Filter by TLS version
tls.handshake.extensions.supported_version == 0x0304  # TLS 1.3 (its legacy version field still reads 0x0303)

# Filter by TCP Flag
tcp.flags.syn == 1 && tcp.flags.ack == 0  # SYN (new connection)
tcp.flags.reset == 1                       # RST (connection reset)

# Combined conditions
ip.addr == 192.168.1.0/24 && tcp.port == 443 && tls
  • Display Filter: Filter display among captured packets, syntax as above.
  • Capture Filter: Filter during capture, syntax is BPF (Berkeley Packet Filter), same as tcpdump (e.g., host 192.168.1.100 and port 443).

Attack Techniques

Malware and Attack Chains

Malware Type Comparison Table

| Type | Standalone (Needs Host?) | Spread and Trigger Method | Main Purpose and Characteristics | Typical Example / Note |
|---|---|---|---|---|
| Virus | No (needs a host file) | Requires the user to open or execute the host file | Destroys the system, infects other files | Macro virus, CIH |
| Worm | Yes (standalone executable) | Actively scans for network vulnerabilities, self-replicates and spreads | Consumes network bandwidth and system resources | Blaster, SQL Slammer |
| Trojan | Yes (disguised as normal software) | Deceives users into downloading and installing it themselves | Steals secrets, opens backdoors for remote control | Banking Trojan, RAT (Remote Access Trojan) |
| Ransomware | Yes | Phishing emails, vulnerabilities, or Trojan implantation | Encrypts files and demands ransom for decryption | WannaCry (has worm-like spread characteristics), LockBit |
| Spyware | Yes | Bundled with free software or malicious web pages | Keylogging, monitoring browsing behavior, stealing passwords | Keylogger |
| Logic Bomb | No (usually implanted in a normal program) | Triggers when specific conditions are met (e.g., a particular date or operation) | Insider revenge, timed destruction | Malicious database-deletion script |

OFAC Sanctions List Verification Obligation before Ransomware Payment

OFAC (Office of Foreign Assets Control) maintains a sanctions list (SDN List), listing countries, organizations, and individuals sanctioned by the US.

For enterprises headquartered or operating within the US (or involving USD transactions):

  • If you decide to pay the ransom, you must first verify whether the attacker/payee is on the OFAC sanctions list.
  • Providing funds to sanctioned hostile hacker organizations or terrorists violates US federal sanctions law: civil penalties apply on a strict-liability basis (intent is not required), and willful violations can bring heavy fines or criminal prosecution.
  • Even if they are not on the sanctions list, it is recommended to voluntarily disclose to OFAC after payment (voluntary disclosure can mitigate liability).

The point of the list check is compliance, not decryption assurance: enterprises verify the list to avoid becoming a funding source for sanctioned entities; whether the attacker will actually decrypt is a separate matter of negotiation and technical assessment.

Virus Advanced Variant Technology Comparison

  • Polymorphic Virus: Changes its own encryption signature every time it infects, but the decrypted malicious core code remains the same. The goal is to evade signature-based scanning by traditional antivirus software.
  • Metamorphic Virus: Rewrites its own code every time it infects (e.g., replacing instructions, inserting junk code), the appearance and structure are completely different, but the malicious behavior executed is the same, making it the most difficult type to detect.

Packing & Obfuscation Technology

Antivirus software's static analysis scans files on the hard disk, comparing signatures of known malicious programs. Packing makes the appearance on the hard disk completely different from the real code, thereby bypassing scanning:

Packing Process:

  1. The packer compresses/encrypts the original malicious code and merges it with a small unpacking routine (the Stub) into a new executable file.
  2. When antivirus scans the file, it sees compressed gibberish plus the Stub, finds no malicious signature, and lets it pass.
  3. When the user runs the file, the Stub executes first, unpacking/decrypting the original code and loading it into memory.
  4. The malicious code appears and executes only in memory.

| Technology | Operation Method | Attacker Goal | Common Tools/Techniques |
| --- | --- | --- | --- |
| Packing | Original code compressed/encrypted and wrapped in a Stub, which loads it into memory upon execution | Make the static appearance on disk differ from the real code, bypassing signature scanning | UPX (general compression), Themida (commercial-grade protection), custom packers |
| Obfuscation | Rewrites code structure (e.g., shuffling execution flow, inserting junk code); execution result remains unchanged | Increase the time cost of reverse engineering and manual analysis | Variable renaming, Control Flow Flattening, junk code insertion |
| Crypter | Advanced form of packing; protects the Payload with strong encryption and decrypts it dynamically at execution | Achieve FUD (Fully Undetectable); each generated file has a different signature | Custom Crypter |

How to detect packing: Packed executables show three anomalies on disk: the content looks like random noise (regularity is lost after compression/encryption), normal error messages and API names disappear from the strings, and the list of imported Windows API calls is drastically shortened (the Stub needs only a few APIs to load the real program into memory; the rest only appear after unpacking).

Countermeasures: Sandbox dynamic analysis (let it run and observe behavior), Memory Dump (analyze after unpacking), behavioral detection instead of signature comparison.
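The "looks like random noise" symptom can be quantified with Shannon entropy: compressed or encrypted data approaches the 8 bits/byte maximum, while ordinary code and text sit noticeably lower. A minimal sketch (the threshold mentioned in the comment is a rough heuristic, not a standard):

```csharp
using System;

// Shannon entropy in bits per byte: packed/encrypted sections approach 8.0,
// while plain code and text usually stay well below ~7.2 (rough heuristic).
double Entropy(byte[] data) {
    var counts = new long[256];
    foreach (byte b in data) counts[b]++;
    double entropy = 0;
    foreach (long c in counts) {
        if (c == 0) continue;
        double p = (double)c / data.Length;
        entropy -= p * Math.Log2(p);
    }
    return entropy;
}

byte[] lowEntropy = new byte[4096];        // all zeros → entropy 0
byte[] highEntropy = new byte[4096];
new Random(42).NextBytes(highEntropy);     // pseudo-random → close to 8

Console.WriteLine($"{Entropy(lowEntropy):F2}");   // 0.00
Console.WriteLine($"{Entropy(highEntropy):F2}");  // close to 8
```

Tools such as PE analyzers apply this idea per-section; a section with near-maximal entropy is a strong hint of packing or embedded encrypted data.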

Cyber Kill Chain Comparison Table

The Cyber Kill Chain proposed by Lockheed Martin breaks down APT (Advanced Persistent Threat) attacks into 7 stages:

| Stage | Name | Description | Typical Activity |
| --- | --- | --- | --- |
| 1 | Reconnaissance | Collect target information | OSINT (Open-Source Intelligence), social media investigation, scanning public services |
| 2 | Weaponization | Create attack tools | Combine vulnerability exploits with a Payload into a deliverable weapon (e.g., malicious PDF) |
| 3 | Delivery | Deliver the weapon to the target | Phishing emails, malicious websites, USB drop |
| 4 | Exploitation | Trigger the vulnerability | Exploit software vulnerabilities, zero-day attacks, user opens a malicious attachment |
| 5 | Installation | Implant a persistent backdoor | Install a RAT (Remote Access Trojan), create scheduled tasks, modify registry keys |
| 6 | Command & Control (C2) | Establish a remote control channel | Connect back to the C2 server, receive instructions |
| 7 | Actions on Objectives | Achieve the final goal | Data theft, data destruction, lateral movement, ransomware |

Cyber Kill Chain vs MITRE ATT&CK

| Comparison Aspect | Cyber Kill Chain | MITRE ATT&CK |
| --- | --- | --- |
| Structure | Linear 7-stage chain | Tactics × Techniques matrix |
| Granularity | High-level attack flow | Fine-grained techniques and sub-techniques |
| Applicable Scenario | Understanding overall attack flow, post-incident retrospective | Writing detection rules, red/blue team exercises |
| Maintenance | Proposed by Lockheed Martin, updated less frequently | Continuously updated by MITRE with community contributions |

  • Kill Chain defense thinking: Interrupting the attack chain at any stage can stop the attack; the earlier the stage, the lower the cost.
  • ATT&CK defense thinking: Build detection rules for each technique to increase attacker costs and exposure risks.
  • Complementary: Use Kill Chain to understand the overall attack, use ATT&CK to build fine-grained detection capabilities.
  • Vulnerability exploitation techniques corresponding to each stage of Kill Chain: Exploitation stage commonly uses Buffer Overflow, ROP (Return-Oriented Programming); Installation stage commonly uses DLL Side-Loading.

The parentheses below contain official MITRE ATT&CK tactic IDs (Tactic IDs); searching attack.mitre.org by ID lists all specific techniques under that tactic.

| Kill Chain Stage | Corresponding ATT&CK Tactic | Supplement |
| --- | --- | --- |
| Reconnaissance | Reconnaissance (TA0043) | One-to-one correspondence: collecting target information. |
| Weaponization | Resource Development (TA0042) | Obtain or build attack infrastructure and tools. |
| Delivery | Initial Access (TA0001) | Phishing, vulnerability exploitation, supply chain attacks, etc. |
| Exploitation | Execution (TA0002), Defense Evasion (TA0005) | Execute code after triggering the vulnerability, and evade detection. |
| Installation | Persistence (TA0003), Privilege Escalation (TA0004) | Implant a backdoor and escalate privileges to maintain access. |
| C2 | Command and Control (TA0011) | Establish a remote control channel. |
| Actions on Objectives | Collection (TA0009), Exfiltration (TA0010), Impact (TA0040) | Collect and exfiltrate data, or cause damage. |

MITRE Defense-Side Frameworks:

In addition to ATT&CK (attacker perspective), MITRE has two defense-side frameworks:

| Framework | Positioning | Description |
| --- | --- | --- |
| MITRE D3FEND | Encyclopedia for defenders | Organizes defense techniques in a knowledge graph and maps them precisely to ATT&CK attack techniques; each D3FEND technique indicates which ATT&CK TTPs it can counter. Content covers Harden, Detect, Isolate, Deceive, Evict. |
| MITRE ENGAGE | Active defense and adversary engagement | Focuses on Deception and adversary engagement, guiding defenders in using decoys (Honeypot / Honey Token), misleading information, etc., to actively expose TTPs, consume attacker resources, and collect threat intelligence. |

Positioning Distinction of the Three

  • ATT&CK: Describes "how the attacker attacks," for red teams to design attack chains, blue teams to build detection rules.
  • D3FEND: Describes "how the defender defends," is the defense-side mapping of ATT&CK.
  • ENGAGE: Describes "how the defender lures and counters," emphasizing active inducement of attackers into observable environments.
  • Distinguishing tips: If the question says "defense knowledge base mapping ATT&CK attack techniques in a graph" → D3FEND; if it says "framework for network deception and adversarial engagement" → ENGAGE.

Web and API Attacks

Cross-Site Scripting (XSS) Comparison Table

| Type | Malicious Script Storage Location | Trigger Condition | Impact Scope | Defense Focus |
| --- | --- | --- | --- | --- |
| Reflected XSS | URL parameter (not stored on the server) | Trick the user into clicking a link carrying the malicious Payload | Only the user who clicked the link | Validate and filter HTTP request parameters |
| Stored XSS | Database or file system (permanently stored on the server) | Any user browsing the infected page (e.g., message board, forum) | All users browsing the page (most damaging) | Output Encoding, filter input content |
| DOM-based XSS | Frontend browser DOM environment (server entirely unaware) | Triggered when frontend JavaScript reads a malicious source and dynamically modifies the DOM | Only the user who triggered the DOM operation | Avoid high-risk JS APIs such as innerHTML |

Common Confusion: XSS / CSRF / SSRF

The three names are similar but the attack targets and mechanisms are completely different:

| Comparison Aspect | XSS (Cross-Site Scripting) | CSRF (Cross-Site Request Forgery) | SSRF (Server-Side Request Forgery) |
| --- | --- | --- | --- |
| Attack Target | User's browser | User's authenticated Session | Server itself |
| Core Mechanism | Inject a malicious script into the page; it executes in the user's browser | Abuse the Cookie already saved in the user's browser to forge a request to the target site | Induce the server to issue requests to internal or external resources |
| Prerequisites | Website does not correctly filter/encode output | User is logged into the target site and the Session is valid | Server accepts a user-supplied URL and issues the request |
| Attacker Capability | Steal Cookie/Session, deface the page, log keystrokes | Perform operations as the victim (e.g., transfer funds, change password) | Access internal services (e.g., cloud metadata API 169.254.169.254), port scanning |
| Main Defense | Output Encoding, CSP, HttpOnly Cookie | Anti-CSRF Token, SameSite Cookie, Referer verification | Whitelist URLs/IPs, block private IP ranges, do not return the full response |
| OWASP Category | A03 Injection | A01 Broken Access Control | A10 SSRF |

  • XSS attacks the user's browser (Client-side); CSRF borrows the user's authentication state to operate; SSRF attacks the server side.
  • CSRF does not require injecting any scripts, only requires the user to visit a malicious page while logged in.
  • In practice, XSS and CSRF are often used together: inject a script that automatically sends CSRF requests via XSS.
Defending Against XSS in C#: Output Encoding and Input Validation

Incorrect Demonstration: Directly embedding user input into HTML

csharp
// ❌ Dangerous: Directly concatenating user input, attacker can inject <script>alert('XSS')</script>
app.MapGet("/greet", (string name) =>
    Results.Content($"<h1>Hello, {name}</h1>", "text/html"));

Correct Practice: Use HtmlEncoder for Output Encoding

csharp
using System.Text.Encodings.Web;

app.MapGet("/greet", (string name) => {
    string safeName = HtmlEncoder.Default.Encode(name);
    return Results.Content($"<h1>Hello, {safeName}</h1>", "text/html");
});
// Input <script>alert('XSS')</script>
// Output &lt;script&gt;alert(&#x27;XSS&#x27;)&lt;/script&gt; (browser displays as plain text)

ASP.NET Core Razor Automatic Encoding

When variables are output with the @ syntax in Razor views, HTML encoding is applied automatically by default:

csharp
// @Model.Name in Razor is automatically encoded, safe
<p>@Model.Name</p>

// ❌ Dangerous: @Html.Raw() bypasses automatic encoding — avoid unless the content is confirmed safe
<p>@Html.Raw(Model.Name)</p>

CSP (Content Security Policy) Header Configuration

csharp
// Program.cs — ASP.NET Core Middleware setting CSP Header
app.Use(async (context, next) => {
    context.Response.Headers.Append(
        "Content-Security-Policy",
        "default-src 'self'; "
        + "script-src 'self'; "
        + "style-src 'self' 'unsafe-inline'; "
        + "img-src 'self' data:; "
        + "font-src 'self'; "
        + "connect-src 'self'; "
        + "frame-ancestors 'none'; "
        + "base-uri 'self'; "
        + "form-action 'self'"
    );

    // Also set other security headers
    context.Response.Headers.Append("X-Content-Type-Options", "nosniff");
    context.Response.Headers.Append("X-Frame-Options", "DENY");
    context.Response.Headers.Append("Referrer-Policy", "strict-origin-when-cross-origin");

    await next();
});
CSRF Token Verification in C#

ASP.NET Core has a built-in Anti-Forgery Token mechanism, defending against CSRF through a double-token pattern:

MVC Controller

csharp
// Form automatically generates hidden field __RequestVerificationToken
// Razor View:
// <form asp-action="Transfer" method="post">
//     @Html.AntiForgeryToken()
//     ...
// </form>

[HttpPost]
[ValidateAntiForgeryToken] // Verify Token, return 400 if mismatch
public IActionResult Transfer(TransferRequest request) {
    // Execute transfer logic
    return RedirectToAction("Success");
}

Minimal API / SPA Scenario (Manual Token Acquisition)

csharp
using Microsoft.AspNetCore.Antiforgery;

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
builder.Services.AddAntiforgery(options => {
    options.HeaderName = "X-XSRF-TOKEN"; // SPA sends back Token via this Header
});

WebApplication app = builder.Build();

app.MapGet("/antiforgery/token", (IAntiforgery antiforgery, HttpContext context) => {
    AntiforgeryTokenSet tokens = antiforgery.GetAndStoreTokens(context);
    // Write Request Token to Cookie, frontend JS reads and puts into Header
    context.Response.Cookies.Append("XSRF-TOKEN", tokens.RequestToken!,
        new CookieOptions { HttpOnly = false, SameSite = SameSiteMode.Strict });
    return Results.Ok();
});

app.MapPost("/api/transfer", async (IAntiforgery antiforgery, HttpContext context) => {
    await antiforgery.ValidateRequestAsync(context); // Throws AntiforgeryValidationException if verification fails
    // Execute business logic
    return Results.Ok();
});
  • Principle: Server generates a random Token, embedded in form hidden field or Cookie. Upon submission, server compares whether the Cookie Token and form/Header Token are consistent. Cross-site requests from attackers cannot read the target website's Cookie content (Same-Origin Policy), so they cannot provide the correct Token.
  • Paired with SameSite=Strict or SameSite=Lax Cookie attributes, it can further prevent browsers from automatically attaching Cookies in cross-site requests.

OWASP Top 10 2021 Comparison Table

| Rank | Category | Key Description |
| --- | --- | --- |
| A01 | Broken Access Control | Rose from A05 in 2017 to #1. Users can access unauthorized functions or data (e.g., manually changing a URL to access others' data). |
| A02 | Cryptographic Failures | Renamed from 2017's "Sensitive Data Exposure" to focus on root causes. Covers insufficient transport encryption, weak hashing, improper key management. |
| A03 | Injection | Includes SQL Injection, OS Command Injection, LDAP Injection, etc. Dropped from A01 in 2017 to #3. |
| A04 | Insecure Design | New in 2021. Emphasizes architectural flaws in the design phase rather than implementation bugs, reflecting that security must intervene early in the SDLC. |
| A05 | Security Misconfiguration | Default accounts/passwords left unchanged, unnecessary functions/services enabled, incorrect permission settings, etc. |
| A06 | Vulnerable and Outdated Components | Using third-party packages or frameworks with known vulnerabilities; corresponds to supply chain security. |
| A07 | Identification and Authentication Failures | Renamed from 2017's "Broken Authentication" to cover broader identity identification flaws. |
| A08 | Software and Data Integrity Failures | New in 2021. Includes CI/CD supply chain attacks and unverified update mechanisms (e.g., hijacked plugin auto-updates). |
| A09 | Security Logging and Monitoring Failures | Insufficient or unmonitored logs, leading to inability to detect or trace an attack. |
| A10 | Server-Side Request Forgery (SSRF) | New in 2021. Server is tricked into issuing requests to internal resources (e.g., cloud metadata endpoints), potentially leaking cloud keys. |

2021 Version Main Changes

  • A01 became "Broken Access Control" (previously "Injection").
  • Three new items added: A04 Insecure Design, A08 Integrity Failures, A10 SSRF.
  • "Injection" dropped from #1 to #3, but remains one of the most common attack techniques.

Common Confusion: SQL Injection / Command Injection

| Comparison Aspect | SQL Injection | Command Injection (OS Command Injection) |
| --- | --- | --- |
| Injection Target | SQL database query statement | Operating system shell command |
| Attack Vector | SQL fragments embedded in form fields, URL parameters, HTTP headers | Parameters concatenated into shell invocations such as cmd.exe /c or /bin/sh -c |
| Harm | Data leakage, data tampering, authentication bypass, even executing system commands via xp_cmdshell | Execute arbitrary commands directly with server privileges (RCE) |
| Root Cause | Building SQL by string concatenation instead of parameterized queries | Passing user input directly into a shell for execution |
| Main Defense | Parameterized Query, ORM, least-privileged database account | Avoid calling a shell; when necessary, validate input with a whitelist and never concatenate command strings |

  • SQL Injection variants: Blind SQL Injection (no direct echo, inferred via Boolean conditions or time delays), Second-Order SQL Injection (malicious input stored first and triggered in subsequent queries).
  • Command Injection high-risk APIs in C#: Process.Start() paired with cmd /c or /bin/sh -c, and parameters come from user input.
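A sketch of the safe alternative to the high-risk pattern above: ProcessStartInfo.ArgumentList passes each argument as a discrete value and involves no shell, so metacharacters such as & or ; stay literal (the ping invocation is a hypothetical example; the process is never actually started here):

```csharp
using System;
using System.Diagnostics;

// ❌ Dangerous pattern (shown for contrast, do not use):
//   Process.Start("cmd.exe", "/c ping " + userInput);
//   userInput = "127.0.0.1 & del /q C:\\data" would run the second command.

// Safer: no shell is involved; each argument is an isolated value, so
// shell metacharacters like & | ; reach the target program literally.
ProcessStartInfo BuildPing(string host) {
    var psi = new ProcessStartInfo("ping");
    psi.ArgumentList.Add("-n");
    psi.ArgumentList.Add("1");
    psi.ArgumentList.Add(host); // never interpreted by a shell
    return psi;
}

var info = BuildPing("127.0.0.1 & calc.exe");
Console.WriteLine(info.ArgumentList[2]); // the whole string stays one literal argument
```

The key design choice is removing the shell from the call chain entirely; whitelist validation of `host` would still be advisable on top of this.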

Supply Chain Attack

Corresponds to OWASP A06 (Vulnerable and Outdated Components) and A08 (Software and Data Integrity Failures). Attackers do not attack the target directly, but infiltrate its dependent upstream components.

Actual Cases

| Case | Year | Technique | Impact |
| --- | --- | --- | --- |
| SolarWinds (SUNBURST) | 2020 | Attacker infiltrated the SolarWinds build server and implanted a backdoor in an Orion software update | US government agencies and thousands of enterprises affected |
| Codecov | 2021 | Bash Uploader script tampered with, stealing environment variables and keys from CI/CD environments | Open source projects and enterprise CI environments using Codecov |
| Log4Shell (CVE-2021-44228) | 2021 | RCE vulnerability in the JNDI Lookup feature of Apache Log4j 2 | Hundreds of millions of Java applications worldwide affected |
| event-stream (npm) | 2018 | After package ownership was transferred, the new maintainer injected malicious code stealing cryptocurrency wallets | npm package with millions of downloads |
| 3CX | 2023 | Chained supply chain attack: first compromised upstream vendor Trading Technologies, then infected the 3CX desktop application via its software | Hundreds of thousands of enterprise users |

Countermeasures

  • Use SCA (Software Composition Analysis) tools to scan for vulnerabilities in dependent packages (e.g., dotnet list package --vulnerable, Snyk, OWASP Dependency-Check).

  • Add package integrity verification to CI/CD Pipeline (e.g., NuGet signature verification, npm package-lock.json hash comparison).

  • Adopt SBOM (Software Bill of Materials) to track versions of all dependent components.

  • Establish internal package mirror repositories (e.g., Artifactory, Azure Artifacts) to avoid pulling directly from public Registries.

  • Principle of Least Privilege: CI/CD environment Secrets authorized only to necessary Pipeline stages.

  • SRI (Subresource Integrity): When referencing JavaScript libraries or CSS from an external CDN, add an integrity="sha384-..." attribute to the <script> / <link> tag (the hash is pre-calculated and hardcoded by the developer). The browser recomputes the hash after download and refuses to execute the resource on a mismatch, preventing the scenario where a breached or tampered CDN serves malicious scripts.

    html
    <script src="https://cdn.example.com/jquery.min.js"
            integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"
            crossorigin="anonymous"></script>

BOLA / IDOR (API Security)

BOLA (Broken Object Level Authorization) is the #1 item in OWASP API Security Top 10, and is also a concrete manifestation of OWASP Top 10 A01 (Broken Access Control) in the API scenario.

IDOR (Insecure Direct Object Reference) is the most common form of BOLA:

  • Example: User obtains their own order via GET /api/orders/1001, changing 1001 to 1002 allows accessing others' orders.
  • Root cause: Backend only verifies whether the user is logged in, not whether the user is authorized to access the specific resource.

Defense measures:

  • Backend forcibly checks "whether this user is the owner of the resource" for every request.
  • Use unpredictable resource identifiers (e.g., UUID) instead of auto-incrementing IDs (reduces possibility of guessing, but not a fundamental solution).
  • Implement access control policies at the API Gateway layer.
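The first defense measure can be reduced to a pure function (a framework-agnostic sketch with an in-memory dictionary standing in for the database; names like TryGetOrder are hypothetical):

```csharp
using System;
using System.Collections.Generic;

// In-memory stand-in for an order repository: Id → (Owner, Amount)
var orders = new Dictionary<int, (string Owner, decimal Amount)> {
    [1001] = ("alice", 120m),
    [1002] = ("bob", 80m),
};

// BOLA-safe lookup: being logged in is not enough — the caller must also
// own the specific object; otherwise deny (404/403 in a real API).
bool TryGetOrder(int orderId, string currentUserId, out decimal amount) {
    amount = 0;
    if (!orders.TryGetValue(orderId, out var order)) return false;
    if (order.Owner != currentUserId) return false; // the check missing in IDOR
    amount = order.Amount;
    return true;
}

Console.WriteLine(TryGetOrder(1001, "alice", out _)); // True  — owner
Console.WriteLine(TryGetOrder(1002, "alice", out _)); // False — someone else's order
```

Returning the same "not found" result for missing and forbidden objects also avoids leaking which IDs exist.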

API Security (OWASP API Security Top 10)

OWASP API Security Top 10 (2023 version) lists the most common API security risks:

| Rank | Risk Name | Description |
| --- | --- | --- |
| API1 | Broken Object Level Authorization (BOLA) | Not verifying whether the user is authorized to access a specific object (e.g., changing the ID in /api/users/123 exposes others' data) |
| API2 | Broken Authentication | Authentication mechanism flaws (weak tokens, missing rate limiting, tokens that never expire) |
| API3 | Broken Object Property Level Authorization | Returning too many properties, or allowing modification of properties that should not be modifiable (Mass Assignment) |
| API4 | Unrestricted Resource Consumption | No limits on request frequency, response size, or batch quantity; exploitable for DoS attacks |
| API5 | Broken Function Level Authorization | Not verifying whether the user may call a specific endpoint (e.g., a regular user reaching an administrator API) |
| API6 | Unrestricted Access to Sensitive Business Flows | No protection on sensitive business flows (e.g., bulk purchasing, ticket scalping) |
| API7 | Server Side Request Forgery (SSRF) | API accepts URL parameters and issues server-side requests directly; exploitable to reach internal services |
| API8 | Security Misconfiguration | Missing security headers, overly verbose error messages, unnecessary HTTP methods enabled |
| API9 | Improper Inventory Management | Not tracking all API versions and endpoints; undecommissioned old APIs become attack entry points |
| API10 | Unsafe Consumption of APIs | Trusting data returned by third-party APIs without verification, potentially introducing malicious content |

  • BOLA (API1) is the most common API vulnerability, essentially lacking object-level authorization checks.
  • Difference between API Security and Web Application Security: APIs usually have no UI, attackers operate HTTP requests directly, traditional WAF rules (targeted at HTML/Form) may not be able to effectively defend.
  • Defense Recommendations: API Gateway paired with Rate Limiting, OAuth 2.0 + JWT, input validation, least return principle (return only necessary fields).
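The Rate Limiting part of the recommendation can be sketched as a minimal fixed-window counter (illustrative only; in production this belongs in the API Gateway or middleware such as ASP.NET Core's built-in rate limiter, not hand-rolled code):

```csharp
using System;
using System.Collections.Generic;

// Minimal fixed-window rate limiter: at most `limit` requests per client
// within each `window`. The counter resets when a new window starts.
var windows = new Dictionary<string, (DateTime Start, int Count)>();
TimeSpan window = TimeSpan.FromSeconds(10);
int limit = 5;

bool Allow(string clientId, DateTime now) {
    if (!windows.TryGetValue(clientId, out var w) || now - w.Start >= window)
        w = (now, 0);                  // start a fresh window for this client
    if (w.Count >= limit) {
        windows[clientId] = w;
        return false;                  // over quota → respond HTTP 429 in a real API
    }
    windows[clientId] = (w.Start, w.Count + 1);
    return true;
}

DateTime t0 = new(2024, 1, 1, 0, 0, 0);
for (int i = 0; i < 5; i++) Console.WriteLine(Allow("1.2.3.4", t0)); // True ×5
Console.WriteLine(Allow("1.2.3.4", t0));                             // False — 6th request
Console.WriteLine(Allow("1.2.3.4", t0.AddSeconds(10)));              // True — new window
```

A fixed window is the simplest scheme; sliding-window or token-bucket variants smooth out the burst allowed at window boundaries.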

GraphQL API Specific Security Issues

REST APIs usually map each endpoint to a specific resource, making it easy for attackers to enumerate; GraphQL has only a single endpoint, but the built-in Introspection Query mechanism allows anyone to query the complete Schema structure:

graphql
{ __schema { types { name fields { name } } } }

In the reconnaissance phase of penetration testing, attackers execute Introspection Query to obtain all available queries (Query), mutations (Mutation), type definitions, and field names, equivalent to automatically generating an API attack surface list, significantly shortening the exploration time for subsequent BOLA / injection attacks.

Defense:

  • Disable Introspection in production (enable only in development).
  • Even if Introspection is disabled, implement field-level access control to prevent unauthorized access to sensitive fields.
  • Enable query complexity limits (Query Depth Limiting / Cost Analysis) to prevent DoS caused by recursive queries.
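Query Depth Limiting can be approximated by tracking brace nesting in the query text (a deliberate simplification — real servers enforce the limit on the parsed AST; the limit of 5 here is an arbitrary example):

```csharp
using System;

// Rough depth check: track the maximum `{` nesting in a GraphQL query string.
// Real implementations walk the parsed AST instead of scanning raw text.
int MaxDepth(string query) {
    int depth = 0, max = 0;
    foreach (char c in query) {
        if (c == '{') { depth++; if (depth > max) max = depth; }
        else if (c == '}') depth--;
    }
    return max;
}

bool WithinLimit(string query, int limit) => MaxDepth(query) <= limit;

string introspection = "{ __schema { types { name fields { name } } } }";
Console.WriteLine(MaxDepth(introspection));                 // 4
Console.WriteLine(WithinLimit(introspection, 5));           // True
Console.WriteLine(WithinLimit("{a{b{c{d{e{f{g}}}}}}}", 5)); // False — depth 7
```

Rejecting over-deep queries up front blocks the recursive-query DoS pattern before any resolver runs.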

Common Software Vulnerability Exploitation Techniques Comparison Table

| Technique | Category | Principle | Countermeasure |
| --- | --- | --- | --- |
| Buffer Overflow | Memory Safety | Write data beyond the buffer boundary, overwriting adjacent memory (including the return address) and thereby hijacking execution flow. | Bounds checking, Stack Canary, ASLR, DEP/NX. |
| Use-After-Free (UAF) | Memory Safety | Memory is not cleared after being freed; the attacker allocates a new object at the same address and manipulates it through the stale pointer. | Null out pointers after free, use smart pointers, memory-safe languages (e.g., Rust). |
| Heap Spray | Memory Safety | Fill the heap with Shellcode (attacker-supplied machine code) plus NOP Sleds (runs of no-operation instructions that slide execution into the Shellcode) to raise the probability of a corrupted jump landing on malicious code; often combined with UAF or overflows. | ASLR, caps on heap allocation, memory isolation. |
| Integer Overflow | Memory Safety | An integer wraps around after exceeding its type's range, causing length calculation errors that trigger buffer overflows or logic bypasses. | Use safe integer libraries and range-check before explicit conversions; C# defaults to unchecked (silent wraparound) and can use a checked block or the /checked compiler option to throw OverflowException instead. |
| Format String Attack | Injection | User input is passed directly as the format string of printf()-style functions (e.g., printf(user_input)); the attacker can read stack memory or write to arbitrary addresses via %n. | Never use external input as a format string; always write printf("%s", user_input); enable compiler warnings. |
| XML External Entity Injection (XXE) | Injection | The XML parser processes external entity declarations (<!ENTITY xxe SYSTEM "file:///etc/passwd">), reading local files or launching SSRF requests. | Disable external entity parsing (FEATURE_EXTERNAL_GENERAL_ENTITIES = false); use parsers that do not support DTDs. |
| Server-Side Template Injection (SSTI) | Injection | User input is embedded directly into template engine rendering (e.g., Jinja2, Thymeleaf), triggering template syntax that executes arbitrary code (RCE, Remote Code Execution). | Never concatenate input into template strings; sandbox the template engine; keep data and template context separate. |
| Insecure Deserialization | Injection | When deserializing untrusted data, the attacker manipulates the object graph to trigger specific constructors or callbacks (e.g., PHP's __wakeup, C#'s IDeserializationCallback), leading to RCE or privilege escalation. C#'s BinaryFormatter is a classic high-risk API (deprecated in .NET 5, removed in .NET 9). | Never deserialize data from untrusted sources; in C#, switch to System.Text.Json or XmlSerializer with a type whitelist; verify a signature on serialized data. |
| Race Condition / TOCTOU (Time-of-Check to Time-of-Use) | Logic & Concurrency | A time gap between "check" and "use" lets the attacker swap resources (e.g., files, symbolic links) in between, invalidating the check result. | Use atomic operations; lock the resource once acquired (file locking); avoid relying on intermediate state that can be modified externally. |
| Prototype Pollution | Logic & Concurrency | In JavaScript, manipulating __proto__ or constructor.prototype pollutes the prototype shared by all objects, injecting malicious properties that affect global behavior. | Create prototype-less objects with Object.create(null); freeze prototypes (Object.freeze(Object.prototype)); blacklist keys such as __proto__ and constructor in input. |
| Side-channel Attack | Observation & Inference | Does not attack the algorithm directly; instead measures physical characteristics of execution (timing, power consumption, electromagnetic radiation, cache hit rate) to recover secret data. Typical case: Spectre / Meltdown exploit cache timing differences from CPU speculative execution to read cross-process memory. | Constant-time algorithms, Retpoline/IBRS, KPTI. |
| Zero-day Exploit | Other | Exploits unknown vulnerabilities not yet disclosed or patched by the vendor; with no patch available, defenders cannot rely on signature detection. | Defense in depth; behavioral detection (EDR/XDR); Principle of Least Privilege; network segmentation to limit lateral movement. |
| Fileless Malware | Other | Malicious code never lands on disk; it executes directly in memory (e.g., via PowerShell, WMI, Living-off-the-Land Binaries). Traditional antivirus relies on file scanning and struggles to detect it. | AMSI (Antimalware Scan Interface): script engines (PowerShell, VBScript, WMI) pass content to the antivirus engine before execution, intercepting decrypted plaintext scripts in memory — the most effective system-level defense against LOLBins; Script Block Logging; behavioral EDR; restrict the PowerShell execution policy. |
| Return-Oriented Programming (ROP) | Memory Safety | Because DEP/NX blocks executing injected Shellcode, the attacker chains existing Gadgets (short code sequences ending in ret) into arbitrary logic, bypassing the non-executable restriction. | CFI, Shadow Stack; ASLR makes Gadget addresses hard to predict so control flow cannot be hijacked. |
| DLL Side-Loading | Supply Chain / Hijacking | Exploits the Windows DLL search order: a malicious DLL with the expected name is placed in the program's directory, so the program loads the malicious version first at startup. Often paired with legitimate, digitally signed programs (e.g., antivirus) to evade detection. | Enable SafeDllSearchMode (the default); load DLLs by absolute path in code; verify DLL digital signatures; application whitelisting. |

Buffer Overflow: Stack Memory Layout

When a function is called, the Stack grows from high address to low address. The complete layout with Stack Canary defense is as follows:

High Address
┌──────────────────────────────────┐
│  Return Address                  │  ← Attack target: overwrite to control execution flow
├──────────────────────────────────┤
│  Saved EBP (Caller Base Pointer) │  ← Overwritten along the way
├──────────────────────────────────┤
│  ★ Stack Canary                  │  ← Defense: random value, verified before return
├──────────────────────────────────┤
│  Local Variables / Buffer        │  ← Write start point, overflow direction ↑
└──────────────────────────────────┘
Low Address (Stack grows downward)

State after overflow:

High Address
┌──────────────────────────────────┐
│  0xdeadbeef (Attacker controlled)│  ← Return address tampered!
├──────────────────────────────────┤
│  AAAAAAAA                        │  ← Saved EBP corrupted
├──────────────────────────────────┤
│  AAAAAAAA                        │  ← ★ Canary overwritten → Detected!
├──────────────────────────────────┤
│  AAAA... (Long input)            │  ← Buffer + overflow
└──────────────────────────────────┘
Low Address

Normal vs Overflow Comparison

| Stack Position (High → Low) | Normal State | After Overflow |
| --- | --- | --- |
| Return Address | 0x00401234 (legal address) | 0xdeadbeef (attacker-controlled) |
| Saved EBP | Caller's base pointer | AAAAAAAA (corrupted) |
| Stack Canary | 0x7f3a9c01 (random value) | AAAAAAAA (overwritten → detected!) |
| Buffer (16 bytes) | Normal data | AAAA... (long input) |

Before the function ret instruction executes, the program verifies if the Canary is intact. If overwritten → immediately terminate the program (__stack_chk_fail), preventing the attacker from controlling the return address.

C# Situation

C# code managed by the CLR has automatic bounds checking and is not susceptible to traditional buffer overflows. However, unsafe blocks that operate on raw pointers bypass bounds checking:

csharp
unsafe void Vulnerable(string input) {
    byte* buffer = stackalloc byte[16];
    fixed (char* p = input) {
        for (int i = 0; i < input.Length; i++)
            buffer[i] = (byte)p[i]; // Overwrites Stack when input > 16 characters!
    }
}

Avoid using unsafe; when high-performance buffer operations are needed, switch to Span<T> or Memory<T> (still has boundary checking).
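For contrast, a sketch of the same copy written with Span<T>: the indexer remains bounds-checked, so an out-of-range write throws at index 16 instead of silently overwriting adjacent stack memory:

```csharp
using System;

// Same logic as the unsafe version, but Span<T>'s indexer is bounds-checked:
// writing past the 16-byte buffer throws IndexOutOfRangeException
// instead of corrupting adjacent stack memory.
bool TryCopy(string input) {
    Span<byte> buffer = stackalloc byte[16];
    try {
        for (int i = 0; i < input.Length; i++)
            buffer[i] = (byte)input[i];
        return true;
    } catch (IndexOutOfRangeException) {
        return false; // overflow attempt detected and stopped by the runtime
    }
}

Console.WriteLine(TryCopy("short"));             // True
Console.WriteLine(TryCopy(new string('A', 32))); // False — blocked at index 16
```

In real code the length would be validated up front rather than caught; the point is that the runtime check makes the overflow fail loudly instead of silently.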

TOCTOU: Time Gap Between Check and Use

Root cause: the check and the use are not atomic operations; the time window between them can be exploited.

C# Example

csharp
// Vulnerable: Time gap between File.Exists and File.WriteAllText
if (File.Exists(path)) {
    // Attacker might replace the symbolic link pointed to by path here
    File.WriteAllText(path, data);
}

// Safer: FileMode.CreateNew performs atomic judgment at the kernel level "create only if it doesn't exist"
// Throws IOException if it already exists (including symbolic link target)
using FileStream fs = new(path, FileMode.CreateNew, FileAccess.Write);
using StreamWriter writer = new(fs);
writer.Write(data);

Supplement: In Unix environments, it must be paired with the O_NOFOLLOW flag (called via P/Invoke in C#) to refuse following symbolic links; Windows symbolic links require administrator privileges, risk is relatively low.

Side-channel Attack: Why does "observation" allow attack?

Core premise: Algorithms have subtle differences in execution behavior when processing different data, and these differences leak in measurable physical characteristics.

Example 1 — Timing Attack: Character Comparison

Common incorrect password comparison implementation (C#):

csharp
// Vulnerable: Early return leaks timing information
bool ComparePassword(byte[] stored, byte[] input) {
    if (stored.Length != input.Length) {
        return false;
    }
    for (int i = 0; i < stored.Length; i++) {
        if (stored[i] != input[i]) {
            return false; // Return early upon first mismatch
        }
    }
    return true;
}
  • Input "bXXX" vs correct password "abcd": First character doesn't match, returns immediately → shortest time.
  • Input "aXXX": First character matches, continues to compare the second before returning → slightly longer time.

Attackers can brute-force character by character: whichever input takes the longest time means that character was guessed correctly. No need to guess everything at once.

Why network latency doesn't block the attack: Law of Large Numbers

Network latency is random noise (normal distribution, random fluctuations up and down); CPU execution time difference is a fixed bias (has a fixed factor of "how many bytes match"). Attackers repeat the same request 10,000 to 100,000 times to the same target and take the average; when the sample size is large enough, random latency cancels out, and the underlying fixed time difference emerges from the noise. Even over the public network, it is feasible; all that is needed is a sufficient number of samples.
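This averaging effect is easy to demonstrate. An illustrative Python simulation (all values hypothetical): per-sample noise is several times larger than the one-unit-per-matched-byte signal, yet averaging a few thousand samples per candidate still recovers the correct first byte.

```python
import random

SECRET = b"abcd"          # password the server holds (toy example)
CHARSET = b"abcdefgh"     # candidate alphabet for the first byte

def leaky_compare_cost(guess: bytes) -> float:
    """Simulated cost of the early-return comparison: one time unit per
    matched leading byte, plus random 'network latency' noise that is
    several times larger than the signal."""
    matched = 0
    for s, g in zip(SECRET, guess):
        if s != g:
            break
        matched += 1
    return matched + random.uniform(0, 5)

def guess_first_byte(samples: int = 3000) -> int:
    """Average many measurements per candidate; random noise cancels out
    and the fixed per-byte bias emerges (Law of Large Numbers)."""
    best, best_avg = -1, -1.0
    for c in CHARSET:
        avg = sum(leaky_compare_cost(bytes([c]) + b"XXX")
                  for _ in range(samples)) / samples
        if avg > best_avg:
            best, best_avg = c, avg
    return best

random.seed(0)  # deterministic run for the example
print(chr(guess_first_byte()))  # → a  (the candidate with the longest average)
```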

The correct approach is constant-time comparison; .NET provides CryptographicOperations.FixedTimeEquals (.NET Core 2.1+):

csharp
using System.Security.Cryptography;

bool ComparePasswordSafe(byte[] stored, byte[] input) {
    // Regardless of how many characters match, execution time is constant, no information leaked
    return CryptographicOperations.FixedTimeEquals(stored, input);
}
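For reference, the Python standard library offers the same guarantee via hmac.compare_digest (a short illustrative sketch):

```python
import hmac

def compare_password_safe(stored: bytes, user_input: bytes) -> bool:
    # hmac.compare_digest takes time independent of where the first
    # mismatch occurs, so no position information leaks.
    return hmac.compare_digest(stored, user_input)

print(compare_password_safe(b"s3cret-pw", b"s3cret-pw"))  # → True
print(compare_password_safe(b"s3cret-pw", b"s3cret-pX"))  # → False
```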

Example 2 — Cache Timing Attack: Spectre

To improve performance, the CPU speculatively executes subsequent instructions before the branch condition has been resolved:

// Even if x is out of bounds, CPU may still speculatively execute the following code:
if (x < array.length) {
    y = secret_array[x];           // ① Read secret data into y
    temp = probe_array[y * 4096];  // ② Access different cache lines based on y
}
// Speculation incorrect → result discarded, but cache state retained!

The attacker then times each position of probe_array:

  • Fast access (Cache Hit) → that cache line was accessed by ② → can infer the value of y → can restore secret_array[x].

This allows reading cross-process memory (including OS Kernel data) that the program is theoretically unauthorized to access.
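The reconstruction step can be modeled abstractly. A toy Python sketch (not a real side channel — a set stands in for the cache, and membership stands in for "access was fast"):

```python
# Toy model of the cache-probe reconstruction step.
SECRET = 42      # byte value the victim code touches transiently
LINE = 4096      # one probe slot per possible byte value (as in step ②)

cache = set()    # stands in for the CPU cache

def victim_speculative_access() -> None:
    # Side effect of mis-speculation: the probe slot indexed by the secret
    # gets cached even though the architectural result is discarded.
    cache.add(SECRET * LINE)

def attacker_recover() -> int:
    # "Time" every probe slot; the one that is fast (cached) reveals the byte.
    for byte in range(256):
        if byte * LINE in cache:
            return byte
    return -1

victim_speculative_access()
print(attacker_recover())  # → 42
```

The real attack does the same inference, but "membership" is measured as a sub-threshold memory access latency rather than a set lookup.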

Example 3 — Cloud Co-residency Attack

The nature of cloud provider (AWS, GCP, Azure) "resource sharing pools" allows multiple tenants to share the same physical server. Attackers only need to spend a few dollars to spin up a VM, and there is a probability of being scheduled to the same physical machine as the target server, thereby sharing L3 Cache and memory buses.

At this point, the attacker executes a cache timing attack locally, completely bypassing the noise problem of network latency, observing the target's cache access patterns with nanosecond precision to infer keys or secret data. Spectre and Meltdown's harm is particularly severe in cloud environments precisely because the combination of "shared hardware + speculative execution" provides ideal attack conditions.

Countermeasures

Defense MeasureTargeted Leakage ChannelDescription
Constant-time algorithmsTiming differencesMake the execution time of all inputs exactly the same, fundamentally eliminating timing signals. Preferred solution for cryptographic operations (e.g., CryptographicOperations.FixedTimeEquals).
Jitter / Timing NoiseTiming differencesAdd random waiting time before and after cryptographic operations, so that the attacker's statistical average requires more samples to separate signals. Belongs to a secondary defense layer, cannot replace constant-time algorithms, but can increase attack costs; effect on local side-channel attacks (nanosecond level) is limited.
Retpoline / IBRSSpectre branch speculationRetpoline compiles indirect branches into return-trampoline sequences so the CPU cannot speculate on attacker-trained branch targets; IBRS is a CPU/microcode feature that restricts indirect branch speculation across privilege boundaries.
KPTIMeltdown Kernel cache leakageKernel Page Table Isolation removes kernel mappings from user-mode page tables, so speculative user-mode reads have no kernel addresses to touch.

BinaryFormatter Deserialization RCE Example (C#)

BinaryFormatter deserialization reconstructs the complete object graph and triggers callbacks such as IDeserializationCallback.OnDeserialization(). The following is a simplified illustration; actual exploitation tools (e.g., ysoserial.net) use .NET Framework built-in classes to string together a Gadget Chain, and the server side does not need to have any custom classes at all.

csharp
using System.Diagnostics;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;

// Attacker side: Create malicious object and serialize to bytes
BinaryFormatter formatter = new();
using MemoryStream ms = new();
formatter.Serialize(ms, new Payload());
byte[] maliciousBytes = ms.ToArray(); // Send this string of bytes to the victim side

// Victim side: Call Deserialize directly on external input (Dangerous!)
ms.Position = 0;
formatter.Deserialize(ms); // Triggers OnDeserialization

[Serializable]
class Payload : IDeserializationCallback {
    public void OnDeserialization(object? sender) =>
        Process.Start("calc.exe"); // Automatically executes after deserialization completes
}

Attack path in Web Forms era: ViewState is serialized and sent to Client side. If MAC verification is disabled, or MachineKey is leaked, the attacker can submit a malicious ViewState → server deserializes → triggers Gadget Chain.
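The same bug class exists in other ecosystems. An illustrative Python sketch using pickle (the payload here is deliberately benign — it just sets a flag instead of launching a process):

```python
import builtins
import pickle

# Attacker side: __reduce__ tells pickle "rebuild this object by calling
# this callable with these arguments" -- the hook that gadget chains abuse.
class Payload:
    def __reduce__(self):
        # Benign stand-in for Process.Start("calc.exe")
        return (exec, ("import builtins; builtins.PWNED = True",))

malicious_bytes = pickle.dumps(Payload())

# Victim side: deserializing untrusted bytes executes the callable.
pickle.loads(malicious_bytes)
print(getattr(builtins, "PWNED", False))  # → True
```

As with BinaryFormatter, the defense is the same: never deserialize untrusted input with a format that reconstructs arbitrary object graphs.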

Clarification of Common Techniques

  • Side-channel Attack: Does not attack the algorithm itself, but observes physical characteristics like timing, power, cache during execution. Spectre/Meltdown is cache timing leakage of CPU speculative execution.
  • Zero-day vs Fileless: Zero-day means "vulnerability not public"; Fileless means "attack does not land on disk," both can exist simultaneously.
  • For the principles and applicable attack techniques of general defense technologies (ASLR, DEP/NX, Stack Canary, CFI, etc.), see Common Security Defense Techniques Comparison Table.

Social Engineering and Identity Spoofing Attacks

Social Engineering Attack Type Comparison Table

TypeEnglishTarget ScopeTechnique Characteristics
PhishingPhishingLarge number of unspecified usersSend fake emails or pages impersonating well-known institutions (banks, Google) in bulk.
Spear PhishingSpear PhishingSpecific individuals or organizationsPre-collect target data, create highly personalized fake emails, difficult to identify.
WhalingWhalingHigh-level executives (CEO/CFO)A subset of Spear Phishing, the goal is someone who can authorize large transfers or disclose secrets.
VishingVoice PhishingPhone usersImpersonate customer service, government agencies, request account or verification code via phone.
SmishingSMS PhishingMobile usersSend malicious links via SMS, often disguised as package notifications or winning notifications.
PretextingPretextingSpecific targetsFabricate reasonable scenarios (e.g., claiming to be IT personnel for remote assistance), deceive targets into cooperating to provide information or execute operations.
BaitingBaitingUnspecified targetsExploit human curiosity, leave USB flash drives implanted with malicious programs in public places.

Social Engineering Classification Logic

  • Breadth decreases: Phishing → Spear Phishing → Whaling (the smaller the target, the higher the precision).
  • Media distinction:
    • Email = Phishing / Spear Phishing / Whaling
    • Phone = Vishing
    • SMS = Smishing
    • Physical bait = Baiting

Password Attack Type Comparison Table

Attack TypeEnglishTechnique DescriptionSpeed vs Stealth
Brute ForceBrute ForceTry all possible password combinations for a single account (e.g., a, b, ..., aa, ab, ...)Slowest, easiest to trigger account lockout
Dictionary AttackDictionary AttackTry one by one using a pre-organized list of common passwords (e.g., rockyou.txt)Faster than brute force, but limited by dictionary quality
Credential StuffingCredential StuffingUse leaked account-password pairs (from data breaches on other websites) to try logging into the target website in bulkExploits users' habit of password reuse across sites, success rate approx. 0.1%–2%
Password SprayingPassword SprayingTry a few common passwords (e.g., P@ssw0rd, Company2024!) against many accountsOnly 1–2 attempts per account, deliberately avoids account lockout thresholds
Rainbow Table AttackRainbow Table AttackPre-calculate hash values of many passwords to build a lookup table, compare with target hash to reverse-lookup plaintextSpace-time trade-off, salting (Salt) makes rainbow tables ineffective

Credential Stuffing vs Password Spraying vs Brute Force

  • Brute Force: One account × all passwords → easy to trigger lockout.
  • Password Spraying: All accounts × few passwords → low-frequency probing, evades lockout.
  • Credential Stuffing: Known account-password pairs × multiple websites → exploits password reuse.
  • Defense commonalities: MFA (Multi-Factor Authentication), abnormal login detection, rate limiting.
  • Defense differences: Brute Force can be handled by account lockout; Password Spraying requires global login failure trend analysis; Credential Stuffing requires detecting low-frequency attempts from many different source IPs.
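The detection differences above can be sketched as a counting heuristic. An illustrative Python fragment (thresholds and event shape are hypothetical; real systems use sliding time windows):

```python
from collections import Counter

def classify_failed_logins(events, threshold: int = 10):
    """events: (source_ip, account, password) tuples of FAILED logins.
    Heuristic: many failures on one account = brute force;
    one password failing across many accounts = spraying."""
    per_account = Counter(acct for _, acct, _ in events)
    accounts_per_password = {}
    for _, acct, pw in events:
        accounts_per_password.setdefault(pw, set()).add(acct)

    alerts = []
    for acct, n in per_account.items():
        if n >= threshold:
            alerts.append(("brute_force", acct))
    for pw, accts in accounts_per_password.items():
        if len(accts) >= threshold:
            alerts.append(("password_spraying", pw))
    return alerts

events = [("1.2.3.4", "alice", f"pw{i}") for i in range(12)]            # brute force
events += [("5.6.7.8", f"user{i}", "Company2024!") for i in range(15)]  # spraying
print(classify_failed_logins(events))
```

Note that per-account lockout alone would catch the first pattern but completely miss the second.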

Hydra / Medusa Brute Force Command Examples

Hydra and Medusa are open-source network login brute force tools supporting multiple protocols.

Hydra Example

bash
# SSH Brute Force (single user + password dictionary)
hydra -l admin -P /usr/share/wordlists/rockyou.txt ssh://192.168.1.100

# HTTP POST form brute force
# /login is login path, user=^USER^&pass=^PASS^ are form parameters, "Invalid" is failure response keyword
hydra -l admin -P passwords.txt 192.168.1.100 http-post-form \
  "/login:user=^USER^&pass=^PASS^:Invalid"

# Password Spraying (multiple users + few passwords)
hydra -L users.txt -p 'Company2024!' ssh://192.168.1.100

# FTP Brute Force (user list + password list)
hydra -L users.txt -P passwords.txt ftp://192.168.1.100

# Limit concurrency and wait time (avoid triggering lockout or detection)
hydra -l admin -P passwords.txt -t 4 -W 3 ssh://192.168.1.100

Medusa Example

bash
# SSH Brute Force (syntax similar to Hydra)
medusa -h 192.168.1.100 -u admin -P passwords.txt -M ssh

# Multi-host batch test
medusa -H hosts.txt -U users.txt -P passwords.txt -M ssh
  • -l/-u: Single username; -L/-U: Username list file.
  • -p/-P: Single password / password list file.
  • -t: Hydra concurrent threads (default 16); -W: Wait seconds between attempts.
  • Limited to use in authorized penetration testing environments.

Watering Hole Attack

Attackers do not attack the target directly, but pre-infiltrate websites frequently visited by the target, implanting malicious code (e.g., browser vulnerability exploits), waiting for the target to visit.

  • Origin of name: Concept of lions waiting for prey by a water hole.
  • Characteristics: Indirect attack, combines Reconnaissance (recon target's common websites) and Exploitation (exploit browser/plugin vulnerabilities).
  • Typical scenario: Attacker infiltrates industry forums or supplier websites, implants JavaScript vulnerability exploitation code, targeting visitors from specific industries.
  • Relationship with Drive-by Download: The technical means of Watering Hole Attack is usually Drive-by Download.

Drive-by Download

Users only browse the web (no need to click or download anything), and vulnerabilities in the browser or its plugins are automatically exploited, with malicious programs installed silently in the background.

  • No user interaction required: Unlike phishing attacks, users do not need to click any links or confirm dialog boxes.
  • Common vulnerability exploitation targets: Browser itself, Flash Player (deprecated), Java Applet (deprecated), PDF Reader plugins.
  • Defense: Keep browsers and plugins updated, remove unnecessary plugins, enable browser sandboxing, deploy Web gateway filtering.

Typosquatting

Register domain names with spelling similar to well-known domains, using users' typing errors to direct traffic to malicious websites.

Spoofing TechniqueExample (Target: example.com)
Missing one characterexamle.com
Extra one characterexamplle.com
Adjacent key replacementezample.com (z and x are adjacent)
Homophone/Visual confusionexamp1e.com (number 1 replaces letter l)
TLD replacementexample.org, example.co
  • Package Typosquatting: Upload malicious packages with names similar to well-known packages on package management platforms like npm, PyPI (e.g., coIors spoofing colors), a type of supply chain attack.
  • Defense: DNS monitoring, register common spelling error domains as protection, access important websites via browser bookmarks, use lock files and hash verification for package management.
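The spoofing techniques in the table above are mechanical enough to generate. A small illustrative Python sketch (real tools such as dnstwist cover far more transformations):

```python
def typo_variants(domain: str) -> set:
    """Generate a few classes of typosquatting candidates for a domain."""
    name, tld = domain.rsplit(".", 1)
    variants = set()
    for i in range(len(name)):                       # missing one character
        variants.add(name[:i] + name[i + 1:] + "." + tld)
    for i in range(len(name)):                       # doubled character
        variants.add(name[:i + 1] + name[i] + name[i + 1:] + "." + tld)
    for a, b in (("l", "1"), ("o", "0")):            # visual confusion
        if a in name:
            variants.add(name.replace(a, b) + "." + tld)
    variants.discard(domain)
    return variants

v = typo_variants("example.com")
print("examle.com" in v, "examplle.com" in v, "examp1e.com" in v)  # → True True True
```

Defenders run the same generation against their own domains and pre-register or monitor the results.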

Business Email Compromise (BEC)

Attackers infiltrate or spoof enterprise executive/partner email accounts to deceive employees into executing transfers, leaking confidential data, or changing payment information.

  • Difference from Whaling: Whaling is "fishing" for executives; BEC is "impersonating" executives to send emails to subordinates.
  • Common techniques:
    • CEO fraud: Impersonate CEO to send emails to the finance department requesting emergency transfers.
    • Supplier invoice fraud: Impersonate supplier to notify of bank account changes.
    • Account infiltration: Directly infiltrate employee mailboxes, send fraudulent emails from real accounts (harder to identify).
  • Defense: Secondary confirmation via phone or in-person for large transfers, enable email verification mechanisms (SPF, DKIM, DMARC), employee security awareness training.
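Of the email verification mechanisms, DMARC is the one that ties SPF/DKIM results to a policy. A minimal Python sketch of reading a DMARC TXT record (the record string is a hypothetical example):

```python
def parse_dmarc(txt: str) -> dict:
    """Split a DMARC TXT record into tag/value pairs."""
    return dict(
        part.strip().split("=", 1)
        for part in txt.split(";")
        if "=" in part
    )

# Hypothetical record published at _dmarc.example.com
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["v"], policy["p"])  # → DMARC1 reject
```

p=reject tells receiving servers to discard mail that fails SPF/DKIM alignment — exactly the spoofed-sender mail BEC relies on.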

IoT and Embedded System Attacks

IoT Attack Type Comparison Table

AttackEnglishCore BehaviorCharacteristics and Identification Focus
Black Hole AttackBlack Hole AttackMalicious node claims to be the best route, but discards all packets passing throughTraffic enters and disappears, no response, like a "black hole" swallowing packets
Wormhole AttackWormhole AttackTwo malicious nodes establish a covert tunnel in the network, making remote nodes mistakenly believe they are neighborsDistorts routing topology, interferes with routing protocols, difficult to detect
Sinkhole AttackSinkhole AttackMalicious node broadcasts false "best routes," attracting massive traffic to flow to itself before deciding whether to discard or tamperSimilar to Black Hole, but decides how to handle after "attracting," wider impact scope
Evil Twin AttackEvil Twin AttackEstablishes a fake hotspot with the same SSID (Service Set Identifier) as a legitimate AP (Access Point), deceiving devices into connectingCommon attack for wireless networks and IoT devices, can perform man-in-the-middle attacks or credential theft

Routing Attack Comparison

  • Black Hole = Swallows packets and discards them directly, simplest.
  • Sinkhole = Swallows packets and can handle them selectively, attacker has stronger control, wider impact.
  • Wormhole = Does not discard packets, but distorts routing topology, making two remote malicious nodes look like neighbors, more covert.

Zigbee / IoT Protocol Security

ProtocolMain ApplicationKnown Risks
ZigbeeSmart home sensors, light control, industrial sensingKeys are transmitted in plaintext in the air when devices join the network, can be intercepted; many manufacturers use industry-known shared keys by default, equivalent to public
BLE (Bluetooth Low Energy)Wearable devices, medical equipment, smart locksOld pairing processes (BLE 4.0/4.1) easily eavesdropped; KNOB attack can force both ends to downgrade encryption strength to extremely low
Z-WaveDoor locks, curtains, appliance controlOld security framework (S0) uses fixed keys, easily replayed or cracked; new version (S2) has improved, but old devices cannot be updated and still have risks

Common Attack Techniques:

  • Key Sniffing: Intercept keys transmitted in the air when Zigbee devices are initially paired.
  • Replay Attack: Record legitimate control packets (e.g., "unlock" command), retransmit later to trigger the same action.
  • Firmware Downgrade Attack: Force devices to install old firmware, returning them to a state containing known vulnerabilities.

Defense:

  • Install Code (Factory Pre-loaded Key): Burn a unique key into the device at the factory, no need to transmit in the air when joining the network, avoiding interception.
  • Disable Default Trust Center Key: Default values are known to the industry; disabling them prevents direct exploitation.
  • Firmware Signature Verification: Ensure devices only accept legitimate firmware signed by the manufacturer, blocking downgrade attacks.
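Why does a pre-shared key stop a replayed "unlock" frame? Because authentication alone does not — a recorded frame carries a valid MAC. The frame must also bind a monotonic counter. An illustrative Python sketch (key and frame layout are hypothetical):

```python
import hashlib
import hmac

KEY = b"per-device-install-code"  # hypothetical pre-shared key

def make_command(counter: int, action: bytes) -> bytes:
    """Sender: frame = counter || action || HMAC(key, counter || action)."""
    msg = counter.to_bytes(4, "big") + action
    return msg + hmac.new(KEY, msg, hashlib.sha256).digest()

class LockReceiver:
    def __init__(self) -> None:
        self.last_counter = -1

    def accept(self, packet: bytes) -> bool:
        msg, tag = packet[:-32], packet[-32:]
        if not hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest()):
            return False                     # tampered frame
        counter = int.from_bytes(msg[:4], "big")
        if counter <= self.last_counter:
            return False                     # stale counter: replay rejected
        self.last_counter = counter
        return True

lock = LockReceiver()
pkt = make_command(1, b"unlock")
print(lock.accept(pkt))  # → True  (fresh command accepted)
print(lock.accept(pkt))  # → False (identical recording replayed)
```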

Host Infiltration and Lateral Movement

Container Escape and 4C Security Model

Container Escape attack paths:

PathDescription
Privileged ContainerStart container with full host privileges (--privileged), equivalent to allowing the container to directly manipulate host hardware and system resources, boundary effectively does not exist
Kernel VulnerabilityContainers share the same OS Kernel with the host; kernel vulnerabilities allow programs inside the container to break isolation and gain host control
Mount Host File SystemMount host sensitive directories or Docker control interface (Docker Socket) into the container, equivalent to giving the container a host administrator entry
Dangerous System CapabilitiesRetain high-risk system capabilities not needed by the container (e.g., arbitrary disk mounting, modifying network settings), attackers exploit these capabilities to break isolation

Kubernetes 4C Security Model (from outside to inside):

LevelLayerEnglishSecurity Focus
1CloudCloudCloud account least privilege (IAM, Identity and Access Management), network security groups limit external exposure
2ClusterClusterRole-Based Access Control (RBAC), network policies limit Pod-to-Pod communication, API Server forced authentication
3ContainerContainerExecute as non-administrator, mount read-only root directory, remove unnecessary system capabilities, regular scanning of image vulnerabilities
4CodeCodeSecure coding, dependency package vulnerability scanning (SCA, Software Composition Analysis), secrets not written into images
  • Defense principle: Containers should execute with least privilege, non-administrator identity, read-only root directory, and retain only necessary system capabilities.
  • Container isolation relies on two mechanisms of the OS: Namespaces (isolates the view of processes, network, file system) + cgroups (limits CPU / memory usage).
  • Container and Multi-tenancy Security share "shared infrastructure" risks, defense thinking is similar.
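The container-level defense principle above maps directly onto a Pod spec. A hedged sketch (field values and the image name are hypothetical, not a production baseline):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      securityContext:
        runAsNonRoot: true                  # execute as non-administrator
        readOnlyRootFilesystem: true        # mount read-only root directory
        allowPrivilegeEscalation: false     # block setuid/privilege gain
        capabilities:
          drop: ["ALL"]                     # retain only necessary capabilities
```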

Process Injection and Process Hollowing

Both are techniques for executing malicious code in the memory space of a legitimate process, bypassing whitelist defenses and disguising as normal processes.

TechniquePrincipleCharacteristicsDetection Method
Process InjectionWrite malicious code (Shellcode or DLL) into the memory of a running legitimate process, borrowing the legitimate process's identity to executeLegitimate process (e.g., explorer.exe) memory contains executable segments not belonging to the original programEDR monitors abnormal behavior of "one process performing cross-process writing to another and triggering execution"
Process HollowingCreate a suspended instance of a legitimate process, clear its memory content, implant malicious code, and then resume executionProcess name looks normal, but actual code in memory does not match the content of the executable file on diskEDR compares the actual content loaded in memory by the process with the original executable file on disk

These two techniques work because Windows provides cross-process operation APIs, originally for legitimate purposes like debuggers, monitoring tools, and anti-malware, which attackers borrow to achieve malicious goals. Windows API is divided into two layers:

  • Win32 API: Microsoft's publicly documented high-level interface, exported by kernel32.dll, user32.dll, etc., providing capabilities like cross-process memory writing, creating threads in another process, etc. Win32 API is essentially a wrapper for NTAPI.
  • Native API / NTAPI: Lower-level interface, exported by ntdll.dll, corresponding directly to Windows kernel system calls (Syscalls). Most NTAPIs are not publicly documented but can still be called; some security tools place hooks only at the Win32 layer, so attackers sometimes call NTAPI directly to bypass that layer of detection.

Process Injection Attack Process:

The attack target is a running process (e.g., explorer.exe), with the goal of making the process execute the attacker's code without its knowledge.

  1. Obtain target process operation permissions: Call OpenProcess, pass in the target process's PID, and apply for a Handle from the OS. The Handle is the credential for all subsequent cross-process operations, representing "the OS allows me to operate this process." Sufficient privileges (usually administrator) are required to obtain it.
  2. Allocate space in target process memory: Call VirtualAllocEx (the Ex suffix means cross-process operation) to allocate a region in the target process's virtual address space, and set it to the PAGE_EXECUTE_READWRITE (RWX) attribute, making the region both writable and executable. At this point the region is allocated but still empty.
  3. Write malicious code: Call WriteProcessMemory, write malicious Shellcode directly into the memory address configured in step 2 via the Handle from step 1.
  4. Trigger execution: Call CreateRemoteThread (Win32) or the underlying NtCreateThreadEx (NTAPI), create a new thread within the target process, with the start address pointing to the malicious code just written, attack complete.

Process Hollowing Attack Process:

The difference from process injection is that the attacker does not invade a process already running, but starts a legitimate process themselves and then swaps its content.

  1. Start legitimate process and pause immediately: Call CreateProcess with the CREATE_SUSPENDED flag, starting a system process like notepad.exe or svchost.exe. The process is created but the main thread immediately enters a suspended state, not yet executing a single line of its own code. This step itself is completely legal and triggers no alerts.
  2. Clear legitimate code (hollowing): Call NTAPI's NtUnmapViewOfSection, unmap and clear the original executable code of this suspended process from memory. The process shell (PID, Handle, name) still exists, but the content has been cleared.
  3. Implant malicious code: Call VirtualAllocEx to allocate fresh RWX memory inside the hollowed shell, then call WriteProcessMemory to write the malicious Payload.
  4. Tamper with execution entry point: Call GetThreadContext to obtain the CPU register state of the suspended thread. At this point, the thread is stopped in the ntdll startup stub, and the entry point address is stored in the EAX (32-bit) or RCX (64-bit) register as a parameter, not the instruction pointer EIP/RIP. The attacker changes the value of EAX / RCX to the address of the malicious code, then calls SetThreadContext to write it back.
  5. Resume process: Call ResumeThread (Win32) or NtResumeThread (NTAPI) to lift the suspension. As soon as the process starts, it executes the attacker's malicious code, but Task Manager and most tools still show it as the legitimate notepad.exe.

Does Windows monitor these behaviors?

The Windows kernel itself does not block these API calls because they have legitimate uses, but there are several levels of restrictions:

  • Access permission threshold: OpenProcess requires sufficient privileges. Processes marked as Protected Process Light (PPL) (e.g., lsass.exe, antivirus engine processes) refuse writable Handles even to administrators, directly blocking the injection path.
  • Optional protection mechanisms: Windows 10+ provides Arbitrary Code Guard (ACG), which, once enabled for a process, prohibits dynamically creating or modifying executable memory pages, directly blocking the VirtualAllocEx allocation of RWX regions. However, ACG must be opted into per process; it is not a system default.
  • Security tool monitoring: Windows Defender and third-party EDR subscribe to kernel events via ETW (Event Tracing for Windows) to detect abnormal call sequences such as "configuring RWX memory across processes, then immediately writing and creating remote threads." The kernel design allows it, but it generates telemetry data available for analysis, and security tool alert logic is built on top of this.

Target processes: svchost.exe, explorer.exe, notepad.exe, etc., because security tools usually do not deeply inspect these "trusted" processes.

HTTP Request Smuggling

When there is a discrepancy in how the frontend proxy server (e.g., CDN, load balancer) and the backend server parse HTTP request boundaries, attackers can use this to "smuggle" hidden malicious requests in a single connection.

Why is there a discrepancy? HTTP/1.1 has two ways to indicate request length. RFC 7230 says Transfer-Encoding must take precedence when both appear, but not all implementations comply, so two servers in the same chain may honor different headers:

  • Content-Length (CL): Directly declares total length, "this package is 100 bytes total"
  • Transfer-Encoding: chunked (TE): Chunked transmission, each chunk preceded by length, ending with 0, "send in batches, until empty batch"

If the frontend and backend each believe one, they will cut at different places for the same request, and the "extra part" remains in the backend buffer, mistakenly identified as the beginning of the next request.

CL.TE Attack Process (most common):

The attacker constructs a request with both CL and TE, hiding malicious content after the TE end marker:

http
POST / HTTP/1.1
Host: victim.com
Content-Length: 44          ← Frontend uses CL: 44 bytes total body, all forwarded to backend
Transfer-Encoding: chunked  ← Backend uses TE: parsed by chunk, stops at 0

# ── Backend parsing range ─────────────────────────────────────────
0                           ← 0-length chunk, backend considers request ended (TE end marker)

# ── Residual buffer: Backend stopped, following content sticks to next request ───────────
GET /admin HTTP/1.1         ← Smuggled content, pollutes next user request
Host: victim.com
# ── Frontend forwarding end (44 bytes total) ──────────────────────────────
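The two parsing views of the CL.TE example can be simulated directly. An illustrative Python sketch: one function reads the body the frontend's way (by Content-Length), the other the backend's way (by chunks), and the leftover bytes show what "sticks to" the next request.

```python
def body_by_content_length(raw: bytes, content_length: int):
    """Frontend view: body is exactly Content-Length bytes; returns (body, leftover)."""
    return raw[:content_length], raw[content_length:]

def body_by_chunked(raw: bytes):
    """Backend view: read chunks until the 0-length chunk; returns (body, leftover)."""
    body, pos = b"", 0
    while True:
        line_end = raw.index(b"\r\n", pos)
        size = int(raw[pos:line_end], 16)   # chunk-size line, in hex
        pos = line_end + 2
        if size == 0:
            pos += 2                        # blank line ending the chunked body
            break
        body += raw[pos:pos + size]
        pos += size + 2                     # chunk data + trailing CRLF
    return body, raw[pos:]

# Body of a CL.TE request: terminating chunk first, smuggled bytes after it.
raw_body = b"0\r\n\r\nGET /admin HTTP/1.1\r\nHost: victim.com\r\n\r\n"

_, cl_leftover = body_by_content_length(raw_body, len(raw_body))  # CL covers everything
_, te_leftover = body_by_chunked(raw_body)                        # TE stops at the 0 chunk

print(cl_leftover)                            # → b''  (frontend: one complete request)
print(te_leftover.startswith(b"GET /admin"))  # → True (residue pollutes next request)
```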

Two-step mechanism for Cookie theft:

The response is sent back to the victim rather than the attacker, so the attacker needs to borrow an endpoint that stores data (e.g., comments, search history) as a relay:

Step 1: Attacker smuggles a POST with an unfinished body, targeting the storage endpoint:

http
[Backend buffer residual smuggled by attacker]
POST /post/comment HTTP/1.1
Host: victim.com
Content-Length: 999      ← Deliberately large, backend continues waiting for subsequent input

csrf=xxx&comment=        ← body deliberately truncated, waiting for victim's request to stick in

Step 2: The next user's normal request sticks in and is treated by the backend as part of the POST body:

http
POST /post/comment HTTP/1.1
...

csrf=xxx&comment=GET /home HTTP/1.1  ← Beginning of user request treated as comment content
Host: victim.com
Cookie: session=abc123               ← Cookie stored in comment together

The backend stores the whole segment as a comment; the attacker then requests GET /post/X to view the comment and read the user's Cookie. The victim's browser receives an unexpected comment-submission response.

TE.CL Attack Structure (Frontend uses TE, Backend uses CL, roles swapped):

http
POST / HTTP/1.1
Host: victim.com
Transfer-Encoding: chunked  ← Frontend uses TE: reads all chunks then forwards
Content-Length: 3           ← Backend uses CL: reads only first 3 bytes of body then stops

# ── Backend parsing range (first 3 bytes of body) ──────────────────────────
1a                          ← chunk size header; backend reads "1a\r" totaling 3 bytes then stops
# ── Residual buffer: Backend stopped, following content sticks to next request ──────────
GET /admin HTTP/1.1         ← Smuggled content, pollutes next user request
Host: victim.com

# ── Frontend forwarding end ─────────────────────────────────────────
0                           ← Frontend TE end chunk (backend already stopped reading, does not involve this segment)

After the backend finishes reading the body according to CL=3, the rest of the chunked content stays in the buffer; the pollution mechanism is the same as CL.TE, with the frontend and backend roles swapped.

TE.TE Attack Structure (Both ends see TE, but one end doesn't understand the obfuscated version):

http
POST / HTTP/1.1
Host: victim.com
Transfer-Encoding: chunked   ← Standard TE header
Transfer-Encoding: xchunked  ← Obfuscated TE header (non-standard; one end identifies, other ignores)

# ── End identifying TE ────────────────────────────────────────
0                            ← This end parses to here: 0-length chunk, request ends
# ── End unable to identify TE: degrades to CL parsing ─────────────────────
# ── Residual buffer: Degradation end parses by CL, following content sticks to next request ────
GET /admin HTTP/1.1          ← Smuggled content, pollutes next request
Host: victim.com

One end fails to identify the obfuscated TE header and ignores it, equivalent to degrading to CL.TE or TE.CL scenarios, the subsequent pollution mechanism is the same.

Attack VariantFrontend ParsingBackend ParsingEffect
CL.TEContent-LengthTransfer-EncodingFrontend sends complete request, backend cuts off early, remainder pollutes next request
TE.CLTransfer-EncodingContent-LengthFrontend cuts off early, backend reads excess content according to total length, remainder pollutes next request
TE.TETE (parses obfuscated header)TE (ignores obfuscated header)Attacker deliberately writes slightly deformed TE header, causing one end to fail to identify and switch to another parsing method

Attack Impact:

  • Hijack other users' requests (steal Cookie, Session Token).
  • Bypass frontend security controls (e.g., WAF, access control).
  • Execute arbitrary requests on the backend.

Defense

The root cause of the vulnerability is the HTTP/1.1 parsing discrepancy on the frontend proxy → backend segment, so mitigations must be applied at the proxy layer. RFC 7230 stipulates that TE takes precedence over CL when both exist; if both ends strictly adhere to this, the discrepancy disappears.

Nginx (1.13.0+ patched TE parsing logic, can add the following settings)

Reject mixed requests, return 400:

nginx
# http {} block
map "$http_content_length:$http_transfer_encoding" $smuggling_risk {
    # Nginx concatenates CL value and TE value in request header with a colon :
    # If both have values, it matches "~.+:.+" (regex: at least one character on both sides), then assigns variable 1 (high risk).
    "~.+:.+" 1;
    default   0;
}

# server {} block
if ($smuggling_risk) {
    return 400 "Bad Request: Multiple Length Indicators";
}

Upgrade to HTTP/2 end to end (fundamental solution: HTTP/2 frames carry explicit length fields, so the CL/TE ambiguity does not exist). Note that Nginx's proxy module speaks only HTTP/1.x to upstreams (proxy_http_version accepts 1.0 and 1.1 only), so end-to-end HTTP/2 requires a proxy that supports it, such as HAProxy 2.0+ or Envoy; Kestrel accepts HTTP/2 by default on the backend side.

Proxy layer normalization (remove TE, backend only looks at CL):

nginx
proxy_set_header Transfer-Encoding "";

HAProxy (1.9+ enabled strict header parsing by default, can add ACL to force rejection of mixed requests):

haproxy
# frontend or listen block
http-request deny if { req.hdr_cnt(content-length) gt 0 } { req.hdr_cnt(transfer-encoding) gt 0 }

ASP.NET Core (Kestrel)

Kestrel itself complies with RFC 7230 and requires no additional settings. If you need to leave warning logs at the Middleware layer as defense in depth:

csharp
app.Use(async (context, next) => {
    var headers = context.Request.Headers;
    if (headers.ContainsKey("Content-Length") && headers.ContainsKey("Transfer-Encoding")) {
        context.Response.StatusCode = 400;
        await context.Response.WriteAsync("Bad Request: Invalid Header Combination");
        return;
    }
    await next();
});

DNS Tunneling

Uses the DNS protocol (usually allowed through firewalls on UDP Port 53) as a covert data transmission channel, encoding non-DNS data in DNS queries and responses.

| Aspect | Description |
| --- | --- |
| Principle | The attacker runs an authoritative DNS server for a domain they control; the victim host encodes stolen data into subdomains (e.g., `dGVzdA.attacker.com`) and sends it as DNS queries; the attacker's DNS server decodes the labels to recover the data. |
| Purpose | C2 communication (Command & Control), data exfiltration, bypassing Captive Portals (forced login pages, e.g., airport / coffee-shop Wi-Fi verification walls) |
| Detection Indicators | Abnormally long subdomain labels, high-frequency DNS queries, abnormal ratio of TXT/NULL record queries, sudden surge in query volume for a single domain |
| Common Tools | iodine, dnscat2, dns2tcp |
| Defense | Deep packet inspection (DPI) of DNS traffic, restrict internal DNS to forward only to trusted recursive resolvers, monitor DNS query length and frequency anomalies |
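The detection indicators above (long labels, high-entropy encoded payloads) can be sketched as a simple query filter. The thresholds and the assumption that the registered domain is exactly two labels are illustrative, not a production heuristic:

```python
import math
from collections import Counter

def entropy(s: str) -> float:
    """Shannon entropy in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def is_suspicious_query(qname: str, max_label_len: int = 40,
                        entropy_threshold: float = 3.5) -> bool:
    # Inspect only the subdomain labels, where tunnel payloads
    # (base32/base64 chunks) are typically encoded.
    labels = qname.rstrip(".").split(".")
    sub_labels = labels[:-2]  # crude: assumes a two-label registered domain
    sub = "".join(sub_labels)
    if not sub:
        return False
    longest = max(len(lbl) for lbl in sub_labels)
    return longest > max_label_len or entropy(sub) > entropy_threshold
```

Real tunnel detectors additionally baseline per-client query rates and TXT/NULL record ratios, since a single long query is not conclusive on its own.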

Cross-Platform Attack Technique Differences (Windows vs Linux)

Privilege Escalation

| Aspect | Windows | Linux |
| --- | --- | --- |
| Common Techniques | Token Manipulation (impersonating high-privilege users), UAC Bypass, unquoted service paths (path contains spaces but is not quoted, allowing hijack by a malicious executable), DLL Side-Loading | SUID/SGID special-permission abuse, sudo misconfiguration (permitting commands that should not be allowed), kernel exploits, cron job misconfiguration, writable PATH directories |
| Key Differences | Windows manages privileges via access tokens and Access Control Lists (ACLs); attackers often gain SYSTEM-level control by impersonating high-privilege tokens | Linux manages privileges via User ID (UID) and Group ID (GID); binaries with the special execution bit (SUID) are the most common exploitation targets |
| Detection Focus | Monitor abnormal use of high-risk system privileges (e.g., debug, impersonation) | Regularly scan for files with unexpected SUID/SGID bits |

Persistence

| Aspect | Windows | Linux |
| --- | --- | --- |
| Common Techniques | Registry run keys (executed automatically at boot), Scheduled Tasks, Startup folder, WMI event subscriptions, service creation | crontab schedules, shell initialization scripts (.bashrc / .profile), systemd services, implanted SSH authorized keys |
| Key Differences | The registry is a Windows-unique persistence vector, diverse and highly covert | Linux persistence mostly relies on shell initialization scripts or scheduling tools |
| Detection Focus | Monitor changes to startup-related registry keys and scheduled tasks | Monitor crontab changes, new systemd service files, and changes to SSH authorized keys |

Lateral Movement

| Aspect | Windows | Linux |
| --- | --- | --- |
| Common Techniques | PsExec (remote command execution), WMI (Windows Management Instrumentation), RDP (Remote Desktop), WinRM (Windows Remote Management), DCOM (Distributed Component Object Model) | SSH, Ansible / Salt configuration-management tools, stolen SSH private keys, NFS network shares |
| Key Differences | Windows domain environments ship many built-in remote-management tools, so attackers can ride the existing management infrastructure without bringing in external tools | Lateral movement in Linux environments relies mainly on SSH; a stolen private key allows spreading across many hosts |
| Detection Focus | Monitor abnormal file-sharing (SMB) and Remote Procedure Call (RPC) connections | Monitor abnormal SSH login sources and changes to SSH authorized-key files |
Linux Privilege Escalation Detection Commands

```bash
# Find all SUID files (may be exploited to execute as the file owner, often root)
find / -perm -4000 -type f 2>/dev/null

# Find all SGID files
find / -perm -2000 -type f 2>/dev/null

# Find files with either the SUID or SGID bit set
find / -perm /6000 -type f 2>/dev/null

# Check sudo configuration (which commands can run as root, possibly without a password)
sudo -l

# Find writable cron directories and scripts
find /etc/cron* -writable -type f 2>/dev/null
ls -la /etc/cron.d/ /var/spool/cron/

# Find world-writable directories (may be used for PATH hijacking)
find / -writable -type d 2>/dev/null | grep -v proc

# Check whether /etc/passwd is writable (rare but fatal)
ls -la /etc/passwd /etc/shadow
```
  • SUID file lists can be compared with GTFOBins to confirm if there are exploitable paths.
  • Windows counterpart tool: LOLBAS (Living Off The Land Binaries and Scripts).

Credential Theft and Lateral Movement Techniques

Pass-the-Hash (PtH)

Windows' NTLM authentication protocol uses a Challenge-Response mechanism in which both sides derive the response from the MD4 hash of the password (the NT hash); the plaintext password is never required for verification. An attacker who obtains the hash can therefore impersonate a legitimate user and log into other hosts directly, without cracking the original password.

NTLM Background

NTLM was the default authentication protocol before Windows 2000. It was later replaced by Kerberos but is still retained as a fallback, used in workgroup environments, for local account authentication, and in scenarios where Kerberos cannot be used.

  • Prerequisite: Obtain NTLM hash from local password database (SAM, Security Account Manager) or memory.
  • Impact: Obtain domain administrator hash, can move laterally throughout the Windows domain.
  • Defense: Disable NTLM authentication (switch to Kerberos), enable Credential Guard (prevents reading hashes in memory), restrict login scope of privileged accounts.
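To see why the hash alone suffices, here is a minimal sketch of the NTLMv2 response computation (simplified from MS-NLMP; the client blob construction and session keys are omitted, and the example hash is illustrative). Note that the plaintext password never appears — only the stolen NT hash:

```python
import hmac
import hashlib

def ntlmv2_response(nt_hash_hex: str, user: str, domain: str,
                    server_challenge: bytes, client_blob: bytes) -> bytes:
    """Compute the NTLMv2 proof from the NT hash alone (simplified)."""
    nt_hash = bytes.fromhex(nt_hash_hex)  # MD4(UTF-16LE(password)) -- stolen, not computed
    # NTOWFv2: HMAC-MD5 keyed with the NT hash over uppercase(user) + domain
    ntowf_v2 = hmac.new(nt_hash,
                        (user.upper() + domain).encode("utf-16-le"),
                        hashlib.md5).digest()
    # NtProofStr: HMAC-MD5 over the server challenge plus the client blob
    # (timestamp, client nonce, target info -- omitted here)
    return hmac.new(ntowf_v2, server_challenge + client_blob, hashlib.md5).digest()

# An attacker holding only the 16-byte NT hash can answer any challenge:
proof = ntlmv2_response("0123456789abcdef0123456789abcdef",
                        "alice", "CORP", b"\x01" * 8, b"\x00" * 28)
```

This is exactly the property Pass-the-Hash exploits: possession of the hash is computationally equivalent to possession of the password for NTLM authentication.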

Pass-the-Ticket (PtT)

Kerberos authentication uses tickets instead of passwords. After stealing the ticket, the attacker directly injects it into their own connection, accessing resources as the ticket holder without knowing the password or hash.

  • Difference from PtH: PtH uses NTLM hashes; PtT uses Kerberos tickets. Once an environment disables NTLM in favor of Kerberos, PtH stops working, so attackers switch to PtT.
  • Golden Ticket: After obtaining the password hash of the special account (krbtgt) responsible for signing all tickets in Kerberos, one can forge any user's TGT (Ticket-Granting Ticket), equivalent to obtaining the master key for the entire domain.
  • Silver Ticket: Only forges the service ticket for a single service, scope is more limited than Golden Ticket, but harder to detect because it does not need to request from the Kerberos server.
  • Defense: Regularly reset krbtgt account password (must reset twice consecutively to completely invalidate), monitor tickets with abnormally long validity, deploy Privileged Access Management (PAM).

krbtgt Account and Two-Reset Reason

krbtgt is the built-in service account of the AD domain. KDC uses its password hash to encrypt and sign all TGTs, making it the most sensitive secret in the entire domain. For compatibility, AD retains both the "current password" and the "previous password" of krbtgt, and accepts both when verifying tickets. Therefore, resetting only once leaves the stolen old password as the "previous password," and the forged Golden Ticket remains valid; only after the second reset is the old password completely erased from AD, and the Golden Ticket truly expires. Between the two resets, a period of time must elapse (roughly equal to the default TGT validity of 10 hours) to allow existing legitimate tickets to naturally expire, avoiding normal service interruption.

Kerberoasting

Kerberos allows any domain user who has logged in to request a service ticket from the Key Distribution Center (KDC), and the ticket itself is encrypted with the password hash of the service account. The attacker requests the ticket and takes it offline for brute-force cracking to restore the service account's plaintext password.

  • Why it works: Service accounts (e.g., database service accounts) often go unchanged for long periods and have weak passwords; any authenticated domain user can request a service ticket for any account with a registered SPN (Service Principal Name), without special privileges.
  • Attack steps: Enumerate accounts with SPN → Request service ticket → Take offline for brute-force cracking.
  • Defense: Service accounts use random passwords of 25+ characters; switch to gMSA (Group Managed Service Account, automatically manages password rotation by Active Directory); monitor abnormal behavior of requesting large numbers of tickets in a short time.
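The "many tickets in a short time" defense can be sketched as a sliding-window count over service-ticket request events (Windows Event ID 4769). The event dict format and the threshold here are illustrative assumptions, not a real log schema:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

def kerberoast_suspects(events, window=timedelta(minutes=5), threshold=20):
    """Flag accounts requesting many distinct service tickets in a short window.

    events: iterable of dicts with 'time' (datetime), 'account', 'service'
    -- a simplified stand-in for fields parsed from Event ID 4769.
    """
    recent = defaultdict(deque)   # account -> deque of (time, service)
    suspects = set()
    for ev in sorted(events, key=lambda e: e["time"]):
        q = recent[ev["account"]]
        q.append((ev["time"], ev["service"]))
        # Drop entries that have aged out of the window
        while q and ev["time"] - q[0][0] > window:
            q.popleft()
        # Count distinct SPNs, since Kerberoasting sprays across many services
        if len({svc for _, svc in q}) >= threshold:
            suspects.add(ev["account"])
    return suspects
```

Counting distinct services (rather than raw requests) reduces false positives from a single chatty application re-requesting one ticket.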

DCSync (AD Replication Credential Theft)

The attacker uses the Active Directory directory replication protocol (MS-DRSR, Directory Replication Service Remote Protocol) to impersonate a Domain Controller (DC) and request account password hashes from other DCs, without needing to log into the DC or execute any programs on the DC.

  • Prerequisite: The attacker's account must hold the "Replicating Directory Changes" rights (DS-Replication-Get-Changes and DS-Replication-Get-Changes-All), which normally requires Domain Admin or an account explicitly granted these permissions.
  • Typical use: Steal krbtgt account hash, further create Golden Ticket; or batch extract password hashes for the entire domain.
  • Why it is hard to detect: the network traffic is indistinguishable from normal DC replication and no malicious tooling runs on the DC — a form of Living off the Land.
  • Tool: Mimikatz `lsadump::dcsync /domain:corp.local /user:krbtgt`.
  • Defense: Monitor replication requests from non-DC machines (Windows Event ID 4662); strictly restrict objects granted "DS-Replication-Get-Changes-All"; deploy Privileged Access Management (PAM).
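The Event ID 4662 defense can be sketched as a filter for replication-right GUIDs exercised from non-DC hosts. The event dict format is an illustrative assumption; the GUIDs are the control-access rights documented by Microsoft for directory replication:

```python
# Control-access right GUIDs for AD replication (per Microsoft AD schema docs)
DS_REPL_GET_CHANGES     = "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2"
DS_REPL_GET_CHANGES_ALL = "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2"

def dcsync_alerts(events, dc_hosts):
    """Return 4662 events where a non-DC host exercised replication rights.

    events: iterable of dicts with 'event_id', 'properties' (the accessed
    right's GUID), and 'host' -- a simplified stand-in for parsed security logs.
    """
    repl_guids = {DS_REPL_GET_CHANGES, DS_REPL_GET_CHANGES_ALL}
    return [ev for ev in events
            if ev["event_id"] == 4662
            and ev["properties"].lower() in repl_guids
            and ev["host"] not in dc_hosts]
```

Legitimate replication only ever originates from domain controllers, so any hit from a workstation or member server is high-signal.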

Living off the Land (LOLBins)

Attackers do not bring their own tools; instead they abuse legitimate tools already present on the target system to carry out malicious operations. Because these tools are digitally signed and built into the OS, they do not trigger traditional antivirus alerts.

| Platform | Term | Commonly Abused Tools | Malicious Use Example |
| --- | --- | --- | --- |
| Windows | LOLBins | certutil.exe, mshta.exe, regsvr32.exe, rundll32.exe, powershell.exe | Using certutil to download malicious files, or mshta to execute scripts (built-in, signed tools that antivirus will not block) |
| Linux | GTFOBins | curl, wget, python, perl, find, vim | Abusing a SUID find to spawn a root shell, or curl to download and directly execute malicious scripts |
  • Why hard to detect: These tools are legitimate (administrators also use them), requiring context (who, when, from where, with what parameters executed) to determine if they are malicious.
  • Relationship with Fileless Malware: Living off the Land is the main means of fileless attacks; malicious code exists only in memory, not landing as a file, making traditional antivirus even harder to detect.
  • Defense: Application Whitelisting (AppLocker / WDAC), PowerShell Constrained Language Mode, Script Block Logging, EDR behavioral detection.
  • Reference resources: LOLBAS Project (Windows), GTFOBins (Linux).
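Since the binaries themselves are legitimate, detection must be parameter- and context-aware. A minimal sketch: a plain certutil run is normal, but certutil fetching a URL is suspicious. The rule set below is a small illustrative subset, not a complete detection list:

```python
# (binary, required command-line tokens) pairs -- illustrative subset only
SUSPICIOUS_PATTERNS = [
    ("certutil.exe", ("-urlcache", "http")),   # certutil abused as a downloader
    ("mshta.exe", ("http",)),                  # mshta executing a remote script
    ("regsvr32.exe", ("/i:", "scrobj")),       # regsvr32 "Squiblydoo"-style abuse
]

def lolbin_suspicious(cmdline: str) -> bool:
    """True if a command line matches a known LOLBin abuse pattern."""
    low = cmdline.lower()
    return any(binary in low and all(tok in low for tok in tokens)
               for binary, tokens in SUSPICIOUS_PATTERNS)
```

EDR products apply the same idea with far richer context (parent process, user, network destination), which is why the bullet above stresses "who, when, from where, with what parameters."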

Detection Indicators and Emerging Threats

Indicators of Compromise (IOC) and Indicators of Attack (IOA)

| Comparison Aspect | IOC (Indicator of Compromise) | IOA (Indicator of Attack) |
| --- | --- | --- |
| Nature | Forensic evidence left after an attack has occurred | Behavioral patterns while an attack is in progress |
| Time Point | Post-incident (Reactive) | Real-time (Proactive) |
| Typical Example | Malicious file hash, C2 IP/domain, malicious registry key, abnormal file path | Process-injection behavior, abnormal PowerShell invocation patterns, an account logging into many machines laterally in a short time |
| Corresponding Tool | Threat Intelligence Platform (TIP), SIEM matching rules, YARA rules | EDR / XDR behavioral analysis, UEBA (User and Entity Behavior Analytics) |
| Limitation | Attackers can cheaply change hashes/IPs to evade | Requires more mature detection capability and baselining |
  • IOC lifecycle is short (invalidated every time the attacker changes tools), but establishment cost is low.
  • IOA focuses on "behavior" rather than "characteristics"; even if the attacker changes tools, the behavioral pattern remains similar, and detection effect is more durable.
  • In practice, the two are complementary: IOC is used for quick matching of known threats, IOA is used for detecting unknown or new attacks.
  • IOC is like "fingerprints at a crime scene," IOA is like "suspicious behavior on surveillance."
  • Tactics and techniques described in the MITRE ATT&CK framework are essentially IOAs, which can serve as sources for Threat Hunting hypotheses.
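The contrast can be made concrete in code: IOC matching is a cheap lookup against known indicators, while IOA matching evaluates a behavioral rule. The hash and the parent/child rule below are illustrative examples:

```python
# IOC matching: lookup against known-bad indicators (fast, but brittle --
# the attacker invalidates it by recompiling or changing infrastructure)
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # MD5 of the EICAR test file

def ioc_match(file_md5: str) -> bool:
    return file_md5.lower() in KNOWN_BAD_HASHES

# IOA matching: behavioral rule -- e.g., an Office application spawning
# PowerShell, which survives tool changes because the *behavior* persists
OFFICE_APPS = {"winword.exe", "excel.exe", "powerpnt.exe"}

def ioa_match(process_events) -> bool:
    """process_events: iterable of (parent_name, child_name) tuples."""
    return any(parent.lower() in OFFICE_APPS and child.lower() == "powershell.exe"
               for parent, child in process_events)
```

The IOC check invalidates as soon as the attacker rebuilds the payload; the IOA rule keeps firing as long as the macro-to-PowerShell behavior is reused.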

Threat Intelligence and Security Assessment

Threat Hunting

Threat Hunting is a proactive, hypothesis-driven search method where security personnel actively look for potential threats that have not yet triggered alerts in the environment, complementing Incident Response (IR) that passively waits for SIEM alerts.

| Aspect | Threat Hunting | Incident Response |
| --- | --- | --- |
| Trigger Method | Proactive (hypothesis-driven, no alert needed) | Passive (triggered by SIEM / IDS alerts) |
| Purpose | Find lurking attackers or IOCs not yet detected | Respond to confirmed, known incidents |
| Applicable Scenario | APT dwell time, novel attack techniques, detection-coverage blind spots | Containment, eradication, and recovery after a security incident |

Threat Hunting Process:

  1. Establish Hypothesis: Based on threat intelligence (CTI), MITRE ATT&CK tactics, or past incidents, propose a hypothesis that "an attacker might be using X technique."
  2. Collect and Search Data: Search for IOCs or IOAs in the hypothesis within EDR, SIEM, and log systems.
  3. Analyze and Identify: Distinguish malicious behavior from normal noise, confirm whether a real threat exists.
  4. Respond and Improve: If a threat is found, convert to Incident Response; if not found, convert the search logic into automated detection rules to continuously strengthen detection capabilities.
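Step 2 of the process might look like this in code — a hunt for the hypothesis "an attacker is using encoded PowerShell commands" (MITRE ATT&CK T1059.001) over exported process-creation command lines. The log format is an illustrative assumption:

```python
import re

# Hypothesis: -EncodedCommand / -enc with a long base64 blob indicates
# obfuscated PowerShell execution (ATT&CK T1059.001)
ENCODED_PS = re.compile(
    r"powershell(\.exe)?\s.*-e(nc(odedcommand)?)?\s+[A-Za-z0-9+/=]{20,}",
    re.IGNORECASE,
)

def hunt_encoded_powershell(process_cmdlines):
    """Return command lines matching the hunt hypothesis."""
    return [cmd for cmd in process_cmdlines if ENCODED_PS.search(cmd)]
```

Per step 4, a hunt query that proves useful would then be promoted into an automated detection rule (e.g., a SIEM correlation rule) rather than re-run by hand.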

Diamond Model

The Diamond Model is a foundational framework for threat intelligence analysis, where every intrusion event can be described by four core dimensions, forming a diamond shape.

| Dimension | Description | Example |
| --- | --- | --- |
| Adversary | The actor launching the attack | State-sponsored APT group, cybercrime syndicate |
| Capability | Tools, malware, and exploitation techniques used by the attacker | Cobalt Strike, Log4Shell exploitation |
| Infrastructure | Physical resources controlled or used by the attacker | C2 servers, compromised relay hosts, malicious domains |
| Victim | The target of the attack | A specific enterprise, critical infrastructure, specific personnel |
  • Core assumption of the model: Attackers need Capability via Infrastructure to affect the Victim; breaking any link can disrupt the attack.
  • Complementary to MITRE ATT&CK: The Diamond Model describes "who, using what, via where, attacking whom" (overall relationship), ATT&CK describes "how it is done" (technical details).
  • Community extension: Meta-diamond connects multiple related events, attributing them to the same APT group.

Threat Actor Classification Comparison Table

| Actor Type | Motivation | Skill Level | Resource Scale | Typical Attack Target |
| --- | --- | --- | --- | --- |
| Script Kiddie | Showing off, curiosity | Low (uses off-the-shelf tools) | Individual | Random targets, website defacement |
| Cybercriminal | Money | Medium to high | Criminal group | Financial institutions, personal data |
| Hacktivist | Political ideology, social justice | Medium | Organization / collective | Government agencies, corporate websites |
| Insider Threat | Revenge, money, coercion | High (holds internal privileges) | Individual | Employer's systems and data |
| Nation-State | National interest, espionage | Extremely high | National resources | Critical infrastructure, government, defense industry |
| APT (Advanced Persistent Threat) | Long-term lurking, intelligence gathering | Extremely high | Nation or large organization | High-value targets, supply chain |

Threat Intelligence Source Classification

| Type | Description | Representative Organizations/Resources |
| --- | --- | --- |
| Commercial Intelligence | Paid threat-intelligence subscriptions and OSINT tools | Commercial security vendors, CVE, MITRE ATT&CK, VirusTotal |
| Government Intelligence | Threat advisories released by national CERTs and law enforcement | TWCERT/CC, US-CERT, NCSC |
| Community Intelligence | Industry ISACs and open-source intelligence-sharing platforms | FS-ISAC (finance), E-ISAC (energy), MISP |

Threat Intelligence Sharing Classification: TLP 2.0

CISA (Cybersecurity and Infrastructure Security Agency) adopts TLP (Traffic Light Protocol) 2.0 as the standard classification for threat intelligence sharing, using colors to indicate how widely information may be disseminated. TLP is not tied to any particular medium; it applies equally to emails, reports, and meeting presentations.

| Label | Dissemination Target | Description |
| --- | --- | --- |
| TLP:RED | Original recipients only | Must not be forwarded to non-original recipients or beyond the original sharing context (e.g., attendees of the meeting). |
| TLP:AMBER+STRICT | Within the original recipient's organization | Shared within the organization on a need-to-know basis; must not cross organizations; stricter than AMBER. |
| TLP:AMBER | Original recipient's organization and direct clients | Shared within the organization on a need-to-know basis, and may extend to direct business clients. |
| TLP:GREEN | The same security community | May be shared widely within the same security community (e.g., an ISAC or specific security forums), but not published on the open Internet. |
| TLP:CLEAR | Unrestricted | No dissemination restrictions; may be shared publicly or released on the Internet. |

TLP 2.0 Key Points

  • Restrictions from loose to strict: CLEAR → GREEN → AMBER → AMBER+STRICT → RED.
  • TLP:AMBER vs TLP:AMBER+STRICT: AMBER allows transmission to direct customers, AMBER+STRICT is limited to within the organization.
  • Main differences between 2.0 and 1.0: 2.0 renamed "WHITE" to "CLEAR," added "AMBER+STRICT," and explicitly defined the boundaries of "Community."

Threat Intelligence Standards (STIX / TAXII / CVE / CVSS)

Threat Intelligence Four Classifications (by user level)

The four levels are arranged by abstraction, corresponding to "trend judgment → attack warning → technique analysis → indicator blocking" decision needs from high to low.

| Type | Target User | Typical Example |
| --- | --- | --- |
| Strategic Intelligence | Senior management, CISO, board of directors | Threat trends, attacker motivation, geopolitical risks |
| Operational Intelligence | SOC analysts, CSIRT | Details and targets of specific attack campaigns in progress or imminent |
| Tactical Intelligence | Security architects, red teams | Attacker TTPs, MITRE ATT&CK mapping |
| Technical Intelligence | Firewall/SIEM administrators | Specific IOCs (IP addresses, malicious hashes, malicious domains) |

💡 Terminology Quick Check

  • CISO: Chief Information Security Officer
  • SOC: Security Operations Center
  • CSIRT: Computer Security Incident Response Team
  • TTPs: Tactics, Techniques, Procedures
  • SIEM: Security Information and Event Management
  • IOC: Indicator of Compromise

STIX / TAXII:

STIX defines the structured format of threat intelligence, TAXII is responsible for transmission, and the two are usually used together, forming the basis for automated exchange of threat intelligence.

| Item | English | Description |
| --- | --- | --- |
| STIX 2.1 | Structured Threat Information eXpression | Expresses threat intelligence in JSON, covering object types such as attack patterns, indicators (IOCs), and threat actors |
| TAXII | Trusted Automated eXchange of Indicator Information | Transport protocol for STIX data; defines two exchange models: Collections (pull) and Channels (push) |
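A minimal STIX 2.1 Indicator object might look like the following sketch; the field values (name, domain, labels) are illustrative, and the `pattern` field uses STIX patterning syntax:

```python
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",      # STIX IDs are type--UUID
    "created": now,
    "modified": now,
    "name": "Malicious C2 domain",
    "pattern": "[domain-name:value = 'c2.example.com']",
    "pattern_type": "stix",
    "valid_from": now,
    "labels": ["malicious-activity"],
}

print(json.dumps(indicator, indent=2))
```

In practice such objects are bundled and published to a TAXII Collection, from which consumers poll (pull) new intelligence automatically.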

CVE / CWE / CVSS:

CVE records specific vulnerability events, CWE describes the type of weakness behind the vulnerability, CVSS measures its severity, and the three together form the basis of vulnerability management vocabulary.

| Item | English | Description |
| --- | --- | --- |
| CVE | Common Vulnerabilities and Exposures | Unique vulnerability numbering, format CVE-Year-Sequence (e.g., CVE-2024-12345). Maintained by MITRE; NVD (maintained by NIST) provides search and detailed analysis |
| CWE | Common Weakness Enumeration | Software weakness taxonomy maintained by MITRE; describes weakness types (e.g., SQL Injection, Buffer Overflow), not specific vulnerability instances |
| CVSS | Common Vulnerability Scoring System | Vulnerability severity score (0.0–10.0), maintained by FIRST |

CVSS v4.0 (released by FIRST in November 2023) is the current version; its Base metrics are divided into three groups totaling 11 metrics:

Exploitability Metrics

| Metric | Abbreviation | Options (score high to low) | Description |
| --- | --- | --- | --- |
| Attack Vector | AV | Network > Adjacent > Local > Physical | Same as v3.x |
| Attack Complexity | AC | Low > High | Effort required for the attacker to evade existing defensive conditions |
Attack RequirementsATNone