ISC2 CC Notes 2 Business Continuity and Disaster Recovery

Domain 2: Business Continuity and Disaster Recovery

Domain 2 covers how an organization keeps operating during a disruptive event and how it recovers afterward. This domain matters because it provides the framework for handling disaster scenarios and ensuring business survival.

Business Continuity Plan (BCP)

  • Purpose: The BCP is the organization's long-term strategic plan for staying operational after a disruptive event. It is an umbrella plan that contains many sub-plans and covers the entire organization, not just IT.
  • Components: The BCP contains the disaster scenarios and recovery steps. It is an iterative process, written together with key employees and consultants.
  • Related plans: The BCP contains, or is linked to, other plans:
    • Continuity of Operations Plan (COOP): details how to keep operating during a disaster, including getting staff to the alternate site and everything operationally required to run at reduced capacity for up to 30 days.
    • Crisis Communications Plan: a sub-plan of the CMP describing internal and external communication during a crisis. It specifies who may speak to the media and who may communicate what to internal staff.
    • Cyber Incident Response Plan: describes how to respond during a cyber incident (DDoS, virus, and so on). It can be part of the DRP or a standalone plan.
    • Occupant Emergency Plan (OEP): describes how to protect the facility, employees, and environment during a disaster event, which may include fire, hurricanes, floods, criminal attacks, and terrorism. It focuses on safety and evacuation, detailing how to evacuate and what training employees should receive.
    • Business Recovery Plan (BRP): lists the steps to resume normal business operations after recovering from a disruptive event. This may include switching operations from the alternate site back to the (repaired) primary site.
    • Continuity of Support Plan: narrowly focused on support for specific IT systems and applications. Also called an IT Contingency Plan, emphasizing IT rather than general business support.
    • Crisis Management Plan (CMP): ensures the organization's management can coordinate effectively during an emergency or disruption. It details the steps management must take to protect life safety and property the moment a disaster occurs.
  • The role of senior management: senior management must be involved in initiating and giving final approval to the BCP/DRP process. They are accountable for and own the plans, bear ultimate responsibility, and must demonstrate due care and due diligence. In a severe disaster, someone from senior management or the legal department should speak to the media. They have the final say on priorities, implementation, and the plans themselves. The organization should have a top-down IT security culture.
  • BCP steps: a typical BCP process runs: BCP policy → Business Impact Analysis (BIA) → identify preventive controls → develop recovery strategies → develop the DRP → DRP training/testing → BCP/DRP maintenance.

Disaster Recovery Plan (DRP)

  • Purpose: The DRP focuses on IT systems. It answers the question of how to recover quickly enough in a disaster scenario.
  • DRP lifecycle: the DRP has a lifecycle of Mitigation, Preparation, Response, and Recovery.
    • Mitigation: reduce the impact of a disaster and the likelihood of it happening.
    • Preparation: build the plans, procedures, and tools.
      • Recovery considerations: assess the impact of vendors, contractors, and infrastructure. Ensure data center functionality and connectivity.
      • Simulated tests: used to find gaps in the plan before a real disaster does.
        • DRP Review: DRP team members quickly review the plan, looking for obvious omissions or empty sections.
        • Read-Through/Checklist: managers and functional-area staff read through the plan and check off the list of items needed during recovery.
        • Walk/Talk-through (Tabletop): a group of managers and key personnel sit down and talk through the recovery process. This often exposes gaps, omissions, or technical inaccuracies that would block recovery.
        • Simulation Test/Walkthrough Drill: the team simulates a disaster scenario and each team responds according to the DRP.
      • Physical tests:
        • Partial Interruption: take a single application down and fail it over to the alternate facility. Usually done outside business hours.
    • Response: react quickly and effectively when a disaster occurs, and assess whether an alarm or discovered event is serious enough to constitute a disaster.
    • Recovery: restore systems to an operational state.

Business Impact Analysis (BIA)

  • Purpose: The BIA is a component of the BCP. It identifies critical business functions, their dependencies, and the impact of disruption on them.
  • Key metrics: the BIA helps define several key time and data metrics used to set recovery strategies and targets (a short worked example follows this list):
    • Maximum Tolerable Downtime (MTD): the total time a system can be down before the organization suffers serious harm. MTD must be greater than or equal to RTO + WRT. Other terms include MAD, MTO, MAO, and MTPoD. Keep in mind that of companies suffering severe data loss, 43% never reopen and 29% close within two years.
    • Recovery Time Objective (RTO): the time needed to restore the system (hardware). The RTO must fit within the MTD.
    • Work Recovery Time (WRT): the time needed to configure the restored system so the business function resumes.
    • Recovery Point Objective (RPO): the amount of data loss that can be tolerated. The RPO must ensure the maximum tolerable data loss for each system, function, or activity is not exceeded.
    • Mean Time Between Failures (MTBF): the average time a system runs before failing.
    • Mean Time to Repair (MTTR): the time needed to repair a failed system.
    • Minimum Operating Requirements (MOR): the minimum requirements for critical systems to operate.
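
A quick worked example of how the time metrics fit together (the numbers are illustrative, not from the source): if restoring the hardware takes 4 hours (RTO) and reconfiguring it takes 2 hours (WRT), the business must be able to tolerate at least 6 hours of downtime:

$$\text{MTD} \geq \text{RTO} + \text{WRT} = 4\,\text{h} + 2\,\text{h} = 6\,\text{h}$$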

Recovery Strategies

Based on the MTD, an organization can decide how it will respond to a disaster and which safeguards to adopt. The main types of recovery sites are:

  • Redundant Site: identical to the production site and receives a real-time copy of the data. Fails over automatically and should be geographically distant. This is the most expensive option; users do not notice the failover.
  • Hot Site: houses the critical systems, with data that is real-time or near real-time. Usually a smaller complete data center, but failover is manual. The switch can be done within an hour.
  • Warm Site: runs from backup data and requires manual failover. Switching and restoring takes 4-24 hours or longer. Usually a smaller complete data center, but without real-time or near-real-time data.
  • Cold Site: provides only the infrastructure, with no hardware or backups. The cheapest option but with the longest recovery time (possibly weeks or more).
  • Reciprocal Agreement Site: a contract with another organization to host each other in a disaster. Can be committed space or fully separate racks.
  • Subscription/Cloud Site: pay an external provider to deliver recovery services under a service level agreement (SLA).
  • Mobile Site: a data center on wheels, fully equipped. May still need power and network connectivity.

Lessons Learned

  • After a disruption or a failover test, it is important to hold a lessons-learned review. This phase is often skipped.
  • Lessons learned should focus on improvement, not on assigning blame.
  • The insights gained should feed into updates of the BCP and DRP.

Plan Maintenance

  • The BCP and DRP are iterative processes and need regular updates.
  • Review and update them at least once a year.
  • Retrieve and destroy outdated versions, and distribute the current version.

Incident Management

  • Purpose: monitor and respond to security incidents, ensuring the response is predictable and well understood.
  • Incident types: incidents fall into several categories:
    • Natural: caused by nature, for example earthquakes, floods, tornadoes, or snowstorms.
    • Human: caused by people, either intentional (malware, terrorism, DoS attacks, hacktivism, phishing, and so on) or unintentional (mistakes, negligence, an employee spreading malware from a personal USB drive, and so on).
    • Environmental: distinct from natural disasters; for example power outages, hardware failures, or environmental control problems (heat, pressure, humidity).
  • Other definitions:
    • Incident: multiple adverse events on a system or network, usually caused by people.
    • Problem: an incident with an unknown cause, requiring root cause analysis to prevent recurrence.
    • Inconvenience: a non-disruptive failure, such as a failed hard drive or one server in a cluster going down.
    • Emergency/Crisis: an urgent event with potential risk to life or property.
    • Disaster: the entire facility is unusable for 24 hours or longer. Geographic dispersion and redundancy can greatly mitigate this. A snowstorm can also be a disaster.
    • Catastrophe: the facility is destroyed.
  • Incident management steps: the standard incident management process includes:
    • Detection: identify potential security incidents.
    • Response: take initial action to contain the incident.
    • Mitigation: understand and address the incident's root cause.
    • Reporting: record the incident details and notify management. Reporting is ongoing and starts as soon as malicious activity is detected; it has both technical and non-technical sides.
    • Recovery: restore systems to an operational state.
    • Remediation: roll the mitigation out across systems.
    • Lessons Learned: analyze and improve future responses. Includes root-cause analysis, which tries to identify the underlying weaknesses or vulnerabilities that allowed the incident to happen.
  • Cyber Incident Response Team (CIRT): typically includes senior management, an incident manager, technical leads and their teams, IT security staff, PR, HR, legal, and IT/financial auditors.

Common Threats and Issues

  • Errors and Omissions (human): employee mistakes, usually low impact but potentially damaging. If they are judged to be common or potentially destructive, controls can be put in place to mitigate them.
  • Electrical/Power Problems (environmental): outages and voltage fluctuations. They call for uninterruptible power supplies (UPS) and generator backup.
  • Environmental Controls: manage the data center's heat, pressure, and humidity to protect the hardware. Positive pressure keeps outside contaminants from entering. Humidity should stay between 40% and 60%: low humidity causes static electricity, while high humidity corrodes metal (electronics).
  • Warfare, Terrorism, and Sabotage (human): beyond conventional conflict, much of this activity now happens online, with attacks carried out for national, religious, and other motives.


ISC2 CC Notes 1 Security Principles

The CIA triad is considered the essential foundation of information security; its core concepts underpin the other domains. This domain covers the differences between information security, IT security, and cybersecurity, the CIA triad, IAAA, privacy, risk management, access control, ethics, governance and management, and laws and regulations.

The differences between information security, IT security, and cybersecurity:

  • Information Security: protects information in all forms, including paper documents and voice data.
  • IT Security: protects hardware, software, and data, such as computers and network systems.
  • Cybersecurity: specifically protects IT systems that are accessible via the internet.

The CIA triad: Confidentiality, Integrity, and Availability.

  • Confidentiality: what most people primarily associate with IT security. We keep data and secrets confidential, ensuring unauthorized people cannot access them. The corresponding threat is disclosure, unauthorized access to information. To achieve confidentiality we use:
    • Encryption: for data at rest (e.g. AES-256, full-disk encryption) and data in transit (e.g. secure transport protocols such as SSL/TLS).
    • Access control.
  • Integrity: protects data and systems from modification, ensuring data has not been altered. The corresponding threat is alteration, unauthorized changes to data. To achieve integrity we use:
    • Cryptography.
    • Checksums (e.g. CRC).
    • Message digests, also called hashes (e.g. MD5, SHA-1, or SHA-2); a short digest sketch follows this list.
    • Digital signatures: provide non-repudiation.
    • Access control.
    • Patch management.
  • Availability: ensures authorized people can access the data and systems they need, when they need them. Threats include malicious attacks (DDoS, physical attacks, system intrusions, insider attacks), application failures (code errors), and component failures (hardware faults). Failing to provide availability can amount to destruction, where data or systems are damaged or made inaccessible. To achieve availability we use:
    • IPS/IDS (intrusion prevention/detection systems).
    • Patch management.
    • Redundancy: in hardware power (multiple power supplies, UPS, generators), disks (RAID), traffic paths (network design), HVAC, personnel, HA (high availability), and more.
    • SLAs (service level agreements): define the required uptime (e.g. 99.9%).
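
As a minimal illustration of the message-digest idea above (a hypothetical Java sketch using the standard java.security API; HexFormat requires Java 17+): any change to the input produces a completely different digest, which is how alteration is detected.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class IntegrityDemo {
   public static void main(String[] args) throws Exception {
      MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
      byte[] digest = sha256.digest("wire $100 to Bob".getBytes(StandardCharsets.UTF_8));
      // Flipping even one character of the message yields an entirely
      // different digest, so the recipient can detect alteration.
      System.out.println(HexFormat.of().formatHex(digest));
   }
}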

DAD (Disclosure, Alteration, and Destruction) is the opposite of the CIA triad.

IAAA: Identification, Authentication, Authorization, and Accountability.

  • Identification: claiming an identity. Examples include a username, ID number, or employee Social Security number. "I am Elliot" is identification.
  • Authentication: proving the identity. "Prove you are Elliot." Multi-factor authentication should always be used. Authentication factors typically fall into three types:
    • Type 1: Knowledge factors: something you know, such as passwords, passphrases, or PINs. This is the most common form of authentication. Organizations should enforce password policies to strengthen security: a recommended minimum of 8 characters with upper- and lowercase letters, digits, and symbols, rotated regularly and not reused (for example, remember the last 24 passwords, 90-day maximum age, 2-day minimum age). Key stretching slows password verification, which hinders brute-force attacks.
    • Type 2: Possession factors: something you have, such as an ID, smart card, token, or a cookie on your computer.
    • Type 3: Biometric factors: something you are, such as fingerprints, iris scans, or facial geometry. Biometrics can be physiological (fingerprints) or behavioral (typing rhythm). Biometric authentication involves error rates:
      • FRR (False Rejection Rate): wrongly rejecting an authorized user.
      • FAR (False Acceptance Rate): wrongly accepting an unauthorized user.
      • CER (Crossover Error Rate): the point where FRR and FAR are optimally balanced.
    • Using biometrics raises privacy concerns (they reveal personal information) and security risks (they can be forged). Once leaked, biometric data is far harder to replace than a password.
  • Authorization: determining what a subject is allowed to access. Authorization is implemented with access control models. The core principles are least privilege and need to know.
    • Least privilege: an employee or system gets only the minimum access its role requires. No more, no less.
    • Need to know: even with access rights, you should not access information your job does not require you to know.
    • Separation of duties: split a single task across several people to prevent fraud and errors. Where that is impractical in a small organization, compensating controls should be implemented.
    • Access control models include (a short RBAC sketch follows this list):
      • DAC (Discretionary Access Control): typically used when availability matters most. Access is at the discretion of the object's owner, who can grant or revoke permissions via a DACL (discretionary ACL). Most operating systems use this model extensively.
      • MAC (Mandatory Access Control): typically used when confidentiality matters most. Access is based on labels and clearance levels: objects carry labels, and a subject's clearance must dominate the object's label. Common in the military and other organizations that put confidentiality first. Labels can be more granular than "Top Secret", e.g. "Top Secret – Nuclear". Clearance rests on a formal determination of a subject's current and future trustworthiness.
      • RBAC (Role-Based Access Control): typically used for integrity. Access is based on the user's role and its permissions, which simplifies user management. For example, a payroll employee gets payroll-related access; after transferring to HR, they get HR-related access instead.
      • ABAC (Attribute-Based Access Control): access is based on attributes and conditions of the subject, object, and environment. Attributes can describe the subject (name, role, ID, clearance), the object (name, owner, creation date), and the environment (location, time, threat level). This model is expected to see wide adoption in large enterprises over the coming years. It is also called policy-based access control (PBAC) or claims-based access control (CBAC).
      • Context-based access control: access depends on contextual parameters such as location, time, response sequence, or access history. Examples include requiring a CAPTCHA response, filtering wireless access by MAC address, or a firewall filtering data based on packet analysis.
      • Content-based access control: access depends on the object's attributes or content. Examples include hiding or showing menus in an application, database views, and access to confidential information.
  • Accountability: ensures actions can be traced to whoever performed them, providing non-repudiation so users cannot deny having performed a specific action. Actions are recorded in audit trails.
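
A minimal RBAC sketch in Java (the roles and permission strings are hypothetical, chosen to match the payroll/HR example above): access is decided by the role's permission set, so moving an employee between departments is a single role change rather than many individual permission edits.

import java.util.Map;
import java.util.Set;

public class RbacDemo {
   // Role -> permissions; illustrative values only
   private static final Map<String, Set<String>> ROLE_PERMISSIONS = Map.of(
         "payroll", Set.of("salary:read", "salary:write"),
         "hr", Set.of("employee:read", "employee:write"));

   static boolean isAllowed(String role, String permission) {
      return ROLE_PERMISSIONS.getOrDefault(role, Set.of()).contains(permission);
   }

   public static void main(String[] args) {
      System.out.println(isAllowed("payroll", "salary:read"));   // true
      System.out.println(isAllowed("payroll", "employee:read")); // false
   }
}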

Privacy:

  • Definition: freedom from observation or disturbance; protecting individuals from unauthorized intrusion.
  • Rights: privacy is a human right. It includes the protection of personally identifiable information (PII).
  • Regulations:
    • United States: laws are piecemeal, with inconsistent coverage.
    • European Union: strict rules on data collection, use, and storage.
    • GDPR (General Data Protection Regulation): the EU's data protection and privacy regulation. It applies to every organization that processes the personal data of people in the EU/EEA, wherever that organization is located, and imposes severe fines on violators. GDPR grants individuals several rights, including the right of access (data controllers must provide a copy of personal data free of charge), the right to be forgotten (data erasure), and data portability (obtaining one's data in an electronic format). It requires that data breaches be reported within 72 hours. It emphasizes Privacy by Design: data-processing systems must be designed to keep personal data secure and to collect only the data strictly necessary for the task. It also requires certain companies to appoint Data Protection Officers.

Risk Management:

  • The risk management lifecycle is an iterative process with identification, assessment, response and mitigation, and monitoring phases.
  • Risk formulas:
    • Risk = Threat × Vulnerability (or likelihood).
    • We can also use Risk = Threat × Vulnerability × Impact.
    • Total Risk = Threat × Vulnerability × Asset Value.
    • Residual Risk = Total Risk – Countermeasures.
  • Components:
    • Threat: an event that could cause damage.
    • Vulnerability: a weakness a threat can exploit to cause damage.
    • Asset Value (AV): the value of the asset.
    • Due Diligence (DD): the research before implementation, for example investigating before deploying a security measure. Mnemonic: "Do Detect".
    • Due Care (DC): implementing the security measures; the execution of due diligence. Mnemonic: "Do Correct".
  • Risk assessment:
    • Qualitative analysis: assesses the likelihood and impact of risks, typically using a risk matrix to rank them (low, medium, high, extreme).
    • Quantitative analysis: cost-based risk assessment, built on loss-expectancy calculations (see the short sketch after this list):
      • Exposure Factor (EF): the percentage of the asset that is lost.
      • Single Loss Expectancy (SLE) = AV × EF: the cost of one occurrence. For example, losing a $1000 laptop (AV) holding $10000 of PII (also AV) at 100% loss (EF) gives SLE = ($1000 + $10000) × 100% = $11000.
      • Annual Rate of Occurrence (ARO): how often the event occurs per year, for example the organization loses 25 laptops a year.
      • Annualized Loss Expectancy (ALE) = SLE × ARO: the expected yearly cost of doing nothing, for example ALE = $11000 × 25 = $275000.
      • Total Cost of Ownership (TCO): the total cost of a mitigation (upfront plus ongoing).
  • Risk responses: the options for handling a risk.
    • Accept the risk: acknowledge the risk, because mitigating it would cost more than the risk itself (usually for low risks).
    • Mitigate the risk (reduction): implement controls to bring the risk down to an acceptable level; what remains is the residual risk. For example, encrypt the laptops or enable remote wipe.
    • Transfer the risk: shift the risk to a third party, for example by buying insurance.
    • Avoid the risk: change the plan or activity so the risk disappears entirely. For example, do not issue laptops (where feasible), or build the data center where flooding cannot occur.
    • Reject the risk: know the risk exists but choose to ignore it. This is never an acceptable response.
  • Monitoring and reporting: risk management is a continuous process; risks and the controls in place must be monitored continuously, aided by key risk indicators (KRIs) and key performance indicators (KPIs). The risk management lifecycle is typically run yearly, with out-of-cycle assessments for critical items.
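
A tiny sketch of the loss-expectancy arithmetic above, plus the usual decision rule that a countermeasure is worth buying when its TCO is lower than the loss it prevents (the $50,000 TCO figure is hypothetical):

public class RiskMath {
   public static void main(String[] args) {
      double assetValue = 1_000 + 10_000;       // laptop plus the PII on it
      double exposureFactor = 1.0;              // a lost laptop is a total loss
      double sle = assetValue * exposureFactor; // single loss expectancy
      double aro = 25;                          // laptops lost per year
      double ale = sle * aro;                   // annualized loss expectancy

      System.out.printf("SLE = $%.0f, ALE = $%.0f%n", sle, ale); // $11000, $275000

      // Mitigate when the countermeasure costs less per year than the
      // loss it is expected to prevent; otherwise accepting may be cheaper.
      double tco = 50_000; // hypothetical yearly cost of encryption + remote wipe
      System.out.println(tco < ale ? "Mitigate the risk" : "Accept the risk");
   }
}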

Access Control Categories and Types

  • Access control categories: how a control protects an asset or resource.
    • Administrative controls (directive controls): manage security through policies, procedures, and training, for example organizational policies, regulations, training, and awareness.
    • Technical controls (logical controls): enforce security through hardware, software, or firmware, for example firewalls, routers, and encryption.
    • Physical controls: restrict or monitor access through physical measures, for example locks, fences, guards, doors, and bollards.
  • Access control types: describe a control's function or purpose. The same control can belong to several types.
    • Preventative controls: stop actions before they happen, e.g. least privilege, drug testing, IPS, firewalls, encryption.
    • Detective controls: identify actions during or after an event, e.g. IDS, CCTV, alarms, antivirus.
    • Corrective controls: fix things after an event, e.g. antivirus, patches, IPS.
    • Recovery controls: help recover after an event, e.g. disaster recovery environments, backups, high-availability environments.
    • Deterrent controls: discourage actions, e.g. fences, security guards, dogs, lighting, "Beware of Dog" signs.
    • Compensating controls: provide an alternative when the primary control cannot be implemented or costs too much.

Ethics

  • ISC2 Code of Ethics: protect society; act honorably and with integrity; provide diligent, professional service; advance the profession.
  • Computer ethics: do not use a computer to harm others; do not interfere with other people's computer work; do not snoop in other people's files; do not use a computer to steal; do not use pirated software; do not use other people's computing resources without authorization or compensation; do not appropriate other people's intellectual property; consider the social impact of the programs you write and the systems you design; always use computers in ways that show consideration and respect for the rights of others.
  • Organizational ethics: know and follow the code of ethics inside your own organization.

Governance and Management

  • Governance: sets goals, monitors performance, and defines risk tolerance. The responsibility of senior executives.
  • Management: plans and executes the activities that achieve the goals governance has set, operating within the direction governance provides.
  • C-level executives carry ultimate responsibility for security. The C-level roles to know are the CEO, CIO, CTO, CSO, CISO, and CFO.

Laws and Regulations

  • Types of law:
    • Criminal law: punishes and deters behavior harmful to society; "society" is the victim. The standard of proof is "beyond a reasonable doubt". Penalties can include imprisonment, the death penalty, or fines.
    • Civil law (tort law): compensates the victim, which may be an individual, group, or organization. The standard of proof is "the preponderance of the evidence". Remedies are usually financial.
    • Administrative law: government regulations, for example HIPAA.
    • Private regulations: contractual requirements, for example PCI-DSS.
    • Customary law: based on tradition.
    • Religious law: based on faith.
  • Key regulations:
    • HIPAA (Health Insurance Portability and Accountability Act): US law on the privacy of health information.
    • ECPA (Electronic Communications Privacy Act): protects electronic communications.
    • PATRIOT Act (Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act): expands the powers of law enforcement agencies.
    • CFAA (Computer Fraud and Abuse Act): used to prosecute computer crimes.
    • GDPR: the EU data protection regulation described earlier.

Information security governance: values, vision, mission, and plans.

  • Governance principles include:
    • Values: what do we stand for? Morals, principles, and beliefs.
    • Vision: what do we aspire to be? Hopes and ambitions.
    • Mission: whom do we serve? Motivation and purpose.
    • Strategic objectives: how will we progress? Plans, goals, and sequencing.
    • Actions and KPIs: what must we do, and how do we know whether we have reached our goals? Actions, resources, outcomes, owners, and timeframes.
  • Types of security governance documents:
    • Policies: mandatory, high level, and non-specific. A policy may call for "patching, updates, strong encryption" but will not name a particular operating system, encryption type, or vendor technology.
    • Standards: mandatory; describe the use of specific technologies, for example all laptops must be W10, 64-bit, 8 GB RAM, and so on.
    • Guidelines: non-mandatory; recommendations or discretionary approaches.
    • Procedures: mandatory, low-level, step-by-step instructions that are highly specific. They do name the exact operating system, encryption type, or vendor technology.


HikariCP case study 5 CopyOnWriteArrayList

Code Snapshot: Connection Borrowing Logic

Here’s a key piece of HikariCP internals when a thread tries to borrow a connection from the pool:

// Get a connection from the pool, with a timeout
final PoolEntry poolEntry = connectionBag.borrow(timeout, MILLISECONDS);

// The borrow method returns null only if it times out
if (poolEntry == null) {
   break; // We timed out... break and throw exception
}

This code attempts to borrow a connection from the internal connectionBag. If it doesn’t succeed within the specified timeout, it returns null, and the calling code exits the loop and throws an exception.

Behind the Scenes: What’s connectionBag?

The connectionBag is a custom concurrent structure used by HikariCP to manage connections. Internally, it uses a CopyOnWriteArrayList to store available PoolEntry objects.

Why Use CopyOnWriteArrayList?

CopyOnWriteArrayList is a thread-safe variant of ArrayList where all mutative operations (like add, remove) are implemented by making a fresh copy of the underlying array. It shines in situations where:

  • Reads are far more frequent than writes.
  • Thread safety is critical, but locking overhead must be minimized.

This fits HikariCP’s use case perfectly—connections are borrowed and returned frequently under high concurrency, and most operations are reads (checking for available connections).

What Happens During borrow()?

The borrow() method performs the following steps:

  1. Iterates over the CopyOnWriteArrayList of available connections.
  2. Tries to atomically claim one via compareAndSet.
  3. If no connection is immediately available, it waits until:
    • A connection is returned.
    • The timeout expires.

Thanks to CopyOnWriteArrayList, multiple threads can safely iterate and borrow connections without the risk of ConcurrentModificationException or complex locking strategies.
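
A self-contained sketch of this pattern (not HikariCP's actual classes; PoolEntry is stood in by a bare atomic state flag): iteration runs over an immutable snapshot of the list, and compareAndSet guarantees each entry is claimed by at most one thread.

import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

public class BorrowSketch {
   static final int NOT_IN_USE = 0, IN_USE = 1;

   // Stand-in for a pool entry: just an atomic state flag
   static class Entry {
      final AtomicInteger state = new AtomicInteger(NOT_IN_USE);
   }

   public static void main(String[] args) {
      CopyOnWriteArrayList<Entry> sharedList = new CopyOnWriteArrayList<>();
      sharedList.add(new Entry());
      sharedList.add(new Entry());

      // The iterator works on a snapshot of the backing array, so
      // concurrent adds/removes never throw ConcurrentModificationException.
      for (Entry entry : sharedList) {
         // compareAndSet atomically claims the entry; a second thread
         // racing on the same entry would see false and move on.
         if (entry.state.compareAndSet(NOT_IN_USE, IN_USE)) {
            System.out.println("claimed " + entry);
            break;
         }
      }
   }
}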

Timeout Behavior

If no connection is available within the timeout window:

if (poolEntry == null) {
   break; // We timed out... break and throw exception
}

The system recognizes that it’s better to fail fast than to block indefinitely. This ensures predictability and avoids resource starvation under load.

Trade-offs of CopyOnWriteArrayList

While CopyOnWriteArrayList is great for safe, lock-free reads, it does have drawbacks:

  • Writes (adds/removes) are costly since the array is copied.
  • It’s not ideal if the list is modified very frequently.

In HikariCP's case, the list itself changes only when connections are added to or removed from the pool, which happens far less often than connections are borrowed and returned, so this trade-off is acceptable and even advantageous.

Takeaways

  • CopyOnWriteArrayList plays a crucial role in enabling fast, concurrent access to connection entries in HikariCP.
  • It ensures safety and performance without heavyweight synchronization.
  • The timeout logic provides a safety net to prevent system hangs under high load.

Final Thoughts

This case study shows how a seemingly simple collection choice—like CopyOnWriteArrayList—can dramatically influence the performance and reliability of a high-throughput system like HikariCP. It’s a perfect example of using the right tool for the job in a multithreaded environment.

HikariCP case study 4 FAUX_LOCK

HikariCP Case Study: Understanding FAUX_LOCK

HikariCP, a high-performance JDBC connection pool, is renowned for its minimalist design and efficient concurrency handling. One of its clever optimizations is the FAUX_LOCK, a no-op (no operation) implementation of the SuspendResumeLock class. In this short case study, we’ll explore the purpose of FAUX_LOCK, its implementation, and how it leverages JIT (Just-In-Time) compilation to boost performance.

What is FAUX_LOCK?

The SuspendResumeLock class in HikariCP manages the suspension and resumption of connection acquisition, typically during pool maintenance or shutdown. The FAUX_LOCK is a static instance of SuspendResumeLock that overrides its methods—acquire, release, suspend, and resume—to do nothing:

public static final SuspendResumeLock FAUX_LOCK = new SuspendResumeLock(false) {
   @Override
   public void acquire() {}
   @Override
   public void release() {}
   @Override
   public void suspend() {}
   @Override
   public void resume() {}
};

This “fake” lock acts as a placeholder when actual locking is unnecessary, minimizing overhead in high-performance scenarios.

Why Use FAUX_LOCK?

HikariCP is designed for speed, and every cycle matters in high-throughput applications. The FAUX_LOCK is used when the pool is configured to operate without suspension or locking, specifically when allowPoolSuspension is false (the default). Its key purposes are:

  1. Single-Threaded or Non-Suspended Pools: When pool suspension is disabled, there’s no need for lock operations. FAUX_LOCK eliminates synchronization overhead.
  2. Simplified Code Path: Using FAUX_LOCK avoids conditional logic to check whether locking is needed, maintaining a consistent SuspendResumeLock interface.
  3. Performance Optimization: By providing empty method implementations, FAUX_LOCK reduces the cost of lock operations to zero.

JIT Optimization: The Hidden Benefit

So, what’s the real advantage of this approach? When pool suspension is disabled, FAUX_LOCK provides an empty implementation, with the expectation that the JVM’s Just-In-Time (JIT) compiler will optimize it away. Each call to acquire, release, suspend, or resume is an empty method that does nothing. After the code runs multiple times, the JIT compiler may recognize these methods as no-ops and inline or eliminate them entirely.

This means that, over time, the overhead of calling these methods disappears. When acquiring a connection, the application skips the token acquisition step entirely, as the JIT-optimized code bypasses the empty method calls. This results in significant performance savings, especially in high-concurrency scenarios where connection acquisition is frequent.

When is FAUX_LOCK Used?

FAUX_LOCK is employed when allowPoolSuspension is false. In this mode, HikariCP does not support suspending the pool for tasks like shrinking or reaping idle connections. By using FAUX_LOCK, calls to lock-related methods become no-ops, allowing HikariCP to focus solely on connection management. For example, in a web application with a fixed pool size and no need for suspension, FAUX_LOCK ensures minimal overhead.
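
A runnable sketch of the selection pattern (a simplified stand-in for HikariCP's SuspendResumeLock; the 10,000-permit fair semaphore mirrors the general idea, but treat the details as assumptions rather than a copy of the source):

import java.util.concurrent.Semaphore;

public class FauxLockDemo {
   // Minimal stand-in for HikariCP's SuspendResumeLock
   static class SuspendResumeLock {
      static final SuspendResumeLock FAUX_LOCK = new SuspendResumeLock(false) {
         @Override public void acquire() {}
         @Override public void release() {}
      };

      private final Semaphore semaphore;

      SuspendResumeLock() { this(true); }

      private SuspendResumeLock(boolean createSemaphore) {
         this.semaphore = createSemaphore ? new Semaphore(10_000, true) : null;
      }

      public void acquire() { semaphore.acquireUninterruptibly(); }
      public void release() { semaphore.release(); }
   }

   public static void main(String[] args) {
      boolean allowPoolSuspension = false; // HikariCP's default
      SuspendResumeLock lock = allowPoolSuspension
            ? new SuspendResumeLock()
            : SuspendResumeLock.FAUX_LOCK;

      lock.acquire();   // a no-op for FAUX_LOCK; the JIT can inline it away
      try {
         System.out.println("borrowing a connection...");
      } finally {
         lock.release(); // also a no-op for FAUX_LOCK
      }
   }
}

With allowPoolSuspension enabled, the same call sites go through the real semaphore, so suspend/resume can block acquisitions; with it disabled, every call site degenerates to an empty method.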

Benefits of FAUX_LOCK

  • Zero Overhead: Empty methods eliminate lock-related costs, and JIT optimization may remove them entirely.
  • Code Simplicity: A consistent SuspendResumeLock interface avoids complex branching logic.
  • Flexibility: Supports both high-performance (with FAUX_LOCK) and maintenance-friendly modes (with a real lock).
  • Performance Boost: JIT-eliminated method calls reduce connection acquisition time.

Considerations

FAUX_LOCK is ideal for performance-critical applications but unsuitable when pool suspension is needed (e.g., for dynamic resizing). Enabling allowPoolSuspension requires a real SuspendResumeLock, and misconfiguration could disrupt pool maintenance.

Conclusion

The FAUX_LOCK in HikariCP is a brilliant optimization that showcases how small design choices can yield big performance gains. By providing a no-op lock and leveraging JIT compilation to eliminate method call overhead, FAUX_LOCK ensures HikariCP remains blazingly fast in non-suspended pools. For developers, this underscores the importance of aligning HikariCP’s configuration with application requirements to unlock its full potential.

When configuring your HikariCP pool, check if allowPoolSuspension is necessary. If not, FAUX_LOCK and JIT optimization will work behind the scenes to make your application faster and more efficient.


HikariCP case study 3 getConnection Semaphore

HikariCP Case Study: Understanding the getConnection Semaphore

One of HikariCP's key mechanisms for managing connections efficiently is the use of a Semaphore in the getConnection method. In this case study, we’ll dive into how HikariCP leverages Semaphore to manage database connections, ensuring thread safety and optimal resource utilization.

Background on HikariCP

HikariCP is a JDBC connection pool designed for speed and simplicity. Unlike traditional connection pools that may rely on heavy synchronization or complex locking mechanisms, HikariCP uses modern concurrency utilities from Java’s java.util.concurrent package, such as ConcurrentBag and Semaphore, to achieve low-latency connection management.

The getConnection method is the primary entry point for applications to acquire a database connection from the pool. This method must balance speed, thread safety, and resource constraints, especially under high concurrency. The use of a Semaphore in this context is critical to controlling access to the finite number of connections.

The Role of Semaphore in getConnection

In HikariCP, a Semaphore is used to limit the number of threads that can simultaneously attempt to acquire a connection from the pool. A Semaphore is a concurrency primitive that maintains a set of permits. Threads must acquire a permit to proceed, and if no permits are available, they block until one is released.

Here’s how HikariCP employs a Semaphore in the getConnection process:

  1. Connection Acquisition Limit: The Semaphore is initialized with a number of permits corresponding to the maximum pool size (maximumPoolSize). This ensures that no more than the configured number of connections are ever allocated.

  2. Thread Safety: When a thread calls getConnection, it must first acquire a permit from the Semaphore. This prevents excessive threads from overwhelming the pool or attempting to create new connections beyond the pool’s capacity.

  3. Timeout Handling: HikariCP’s getConnection method supports a timeout parameter (connectionTimeout). If a thread cannot acquire a permit within this timeout, the Semaphore’s tryAcquire method fails, and HikariCP throws a SQLException, informing the application that no connection is available.

  4. Efficient Resource Management: Once a connection is acquired or created, the thread proceeds to use it. After the connection is returned to the pool (via close), the permit is released back to the Semaphore, allowing another thread to acquire a connection.

This approach ensures that HikariCP remains both thread-safe and efficient, avoiding the overhead of traditional locking mechanisms like synchronized blocks.

Case Study: High-Concurrency Scenario

Let’s consider a real-world scenario where a web application handles thousands of concurrent requests, each requiring a database connection. Without proper concurrency control, the application could exhaust the database’s connection limit, leading to errors or crashes. Here’s how HikariCP’s Semaphore-based getConnection handles this:

Setup

  • HikariCP Configuration:
    • maximumPoolSize: 20
    • connectionTimeout: 30000ms (30 seconds)
    • minimumIdle: 5
  • Application: A Java-based REST API using Spring Boot, handling 1000 concurrent requests.
  • Database: PostgreSQL with a maximum of 100 connections.

Observations

  1. Initial State: The pool starts with 5 idle connections (as per minimumIdle). The Semaphore has 20 permits available, corresponding to maximumPoolSize.

  2. Spike in Requests: When 1000 requests hit the API simultaneously, each thread calls getConnection. The Semaphore ensures that only 20 threads can proceed at a time. Other threads wait for permits to become available.

  3. Connection Reuse: As threads complete their database operations and return connections to the pool, permits are released. Waiting threads acquire these permits and reuse existing connections, preventing the need to create new ones unnecessarily.

  4. Timeout Behavior: If the pool is fully utilized and no connections are available within 30 seconds, threads that cannot acquire a permit receive a SQLException. This allows the application to gracefully handle overload scenarios, perhaps by retrying or returning an error to the client.

Results

  • Stability: The Semaphore prevented the pool from exceeding 20 connections, avoiding overwhelming the PostgreSQL server.
  • Performance: Connection reuse and efficient concurrency control minimized latency, with most requests served within milliseconds.
  • Error Handling: Threads that timed out received clear exceptions, allowing the application to implement fallback logic.

Code Example

Below is a simplified view of how HikariCP’s getConnection logic might look, focusing on the Semaphore usage:

import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class HikariPool {
   private final Semaphore connectionSemaphore;
   private final long connectionTimeout;

   public HikariPool(int maxPoolSize, long connectionTimeoutMs) {
      this.connectionTimeout = connectionTimeoutMs;
      // Fair semaphore: waiting threads receive permits in arrival order
      this.connectionSemaphore = new Semaphore(maxPoolSize, true);
   }

   public Connection getConnection() throws SQLException {
      boolean acquired = false;
      try {
         // Attempt to acquire a permit within the timeout
         if (!connectionSemaphore.tryAcquire(connectionTimeout, TimeUnit.MILLISECONDS)) {
            throw new SQLException("Connection timeout after " + connectionTimeout + "ms");
         }
         acquired = true;
         // Logic to acquire or create a connection from the pool
         return acquireConnection();
      } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
         throw new SQLException("Interrupted while waiting for connection", e);
      } catch (RuntimeException e) {
         // Connection acquisition failed after taking a permit: give it back
         if (acquired) {
            connectionSemaphore.release();
         }
         throw e;
      }
   }

   // Called when the application returns a connection (e.g. via close());
   // the permit is released only once the connection is back in the pool.
   public void releaseConnection(Connection connection) {
      connectionSemaphore.release();
   }

   private Connection acquireConnection() {
      // Placeholder for actual connection acquisition logic
      return null;
   }
}

This example illustrates the Semaphore’s role in controlling access to the connection pool. In the actual HikariCP implementation, additional optimizations like the ConcurrentBag for connection storage and housekeeping threads for pool maintenance further enhance performance.

Advantages of Using Semaphore

  • Lightweight Concurrency: Compared to traditional locks, Semaphore provides a more flexible and lightweight mechanism for controlling access.
  • Fairness: HikariCP’s Semaphore is configured to be fair, ensuring that threads are served in the order they request permits, reducing starvation.
  • Timeout Support: The ability to specify a timeout for permit acquisition aligns with HikariCP’s focus on predictable behavior under load.
  • Scalability: The Semaphore scales well under high concurrency, allowing HikariCP to handle thousands of requests efficiently.

Challenges and Considerations

While the Semaphore-based approach is highly effective, there are some considerations:

  1. Configuration Tuning: The maximumPoolSize and connectionTimeout must be carefully tuned based on the application’s workload and the database’s capacity. Setting maximumPoolSize too high can overwhelm the database, while setting it too low can lead to timeouts.

  2. Timeout Handling: Applications must be prepared to handle SQLExceptions caused by timeouts, possibly with retry logic or user-friendly error messages.

  3. Monitoring: Under high load, monitoring the pool’s metrics (e.g., active connections, wait time) is crucial to detect bottlenecks or misconfigurations.

Conclusion

HikariCP’s use of a Semaphore in the getConnection method is a brilliant example of leveraging Java’s concurrency utilities to build a high-performance connection pool. By limiting concurrent access to connections, enforcing timeouts, and ensuring thread safety, the Semaphore enables HikariCP to deliver reliable and efficient database access in demanding environments.

For developers and architects, understanding this mechanism provides valuable insights into designing scalable systems. Properly configuring HikariCP and monitoring its behavior can make the difference between a sluggish application and one that performs flawlessly under pressure.

If you’re using HikariCP in your projects, take the time to review your pool configuration and consider how the Semaphore-based concurrency control impacts your application’s performance. With the right setup, HikariCP can be a game-changer for your database-driven applications.


HikariCP case study 2 HikariPool Initialization

HikariCP Source Code Analysis: HikariPool Initialization

HikariCP is a high-performance JDBC connection pool framework, and one of its core components is the HikariPool class. This article dives into the initialization process of HikariPool, focusing on the following line of code:

pool = fastPathPool = new HikariPool(this);

This line appears in the initialization flow of HikariDataSource or related configuration logic, serving as a critical step in creating the HikariCP connection pool. Below, we’ll analyze its meaning, context, and implementation details from the source code perspective.


1. Context: Background of HikariPool Creation

In HikariCP, HikariPool is the core class responsible for managing database connections, including their creation, recycling, borrowing, and destruction. When an application starts and configures a HikariDataSource, HikariCP initializes a HikariPool instance based on the provided configuration.

The line of code in question typically appears in the initialization logic of HikariDataSource, such as:

private void initializePool() {
   if (pool == null) {
      pool = fastPathPool = new HikariPool(this);
   }
}

Here, pool and fastPathPool are member variables of HikariDataSource, both pointing to the same HikariPool instance. Let’s break down what this code does.


2. Code Analysis: pool = fastPathPool = new HikariPool(this)

2.1 Key Components

  • pool: A member variable in HikariDataSource that stores the HikariPool instance. It serves as the primary entry point for interacting with the connection pool.
  • fastPathPool: Another member variable pointing to the same HikariPool instance. The name fastPathPool suggests a potential performance optimization (more on this below).
  • new HikariPool(this): Creates a new HikariPool instance, passing the current HikariDataSource (or its configuration object) as a parameter to the HikariPool constructor.
  • this: Refers to the HikariDataSource or its related configuration object (e.g., HikariConfig), used to pass configuration details to the pool.

2.2 Why Two Variables?

Assigning the same HikariPool instance to both pool and fastPathPool may seem redundant, but it reflects a design choice for flexibility:

  • pool: Acts as the primary reference to the connection pool, used in most scenarios.
  • fastPathPool: Indicates a potential performance-optimized path (fast path). While fastPathPool currently points to the same object as pool, this design allows HikariCP to potentially switch to a more optimized pool implementation in specific scenarios without altering the external interface.

This approach provides HikariCP with the flexibility to evolve its internal implementation while maintaining compatibility.
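
One plausible reading of the two-field design, sketched with stand-in types (the final/volatile modifiers here follow the general Java idiom and are an assumption, not a quote of HikariCP's source): a final field can be read more cheaply than a volatile one, so an eagerly constructed pool can be served from the fast path while the volatile field still supports lazy creation.

public class FastPathDemo {
   static class Config {}
   static class Pool {
      Pool(Config config) {}
      String borrow() { return "connection"; }
   }

   private final Pool fastPathPool; // final: no volatile read barrier needed
   private volatile Pool pool;      // volatile: supports lazy creation paths

   FastPathDemo(Config config) {
      pool = fastPathPool = new Pool(config); // eager init enables the fast path
   }

   String getConnection() {
      // Prefer the final field when it was set eagerly; otherwise fall back
      Pool p = (fastPathPool != null) ? fastPathPool : pool;
      return p.borrow();
   }

   public static void main(String[] args) {
      System.out.println(new FastPathDemo(new Config()).getConnection());
   }
}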


3. HikariPool Constructor Analysis

To understand what new HikariPool(this) does, let’s examine the HikariPool constructor (simplified version):

public HikariPool(final HikariConfig config) {
   super(config);
   this.connectionTimeout = config.getConnectionTimeout();
   this.validationTimeout = config.getValidationTimeout();
   this.maxLifetime = config.getMaxLifetime();
   this.idleTimeout = config.getIdleTimeout();
   this.leakDetectionThreshold = config.getLeakDetectionThreshold();
   this.poolName = config.getPoolName();
   // Initialize other properties...
   initializeConnections();
}

3.1 Main Tasks of the Constructor

  1. Inheritance and Configuration Setup:

    • HikariPool extends PoolBase, which handles foundational operations like creating and closing connections.
    • The constructor takes a HikariConfig object, extracts configuration parameters (e.g., maximum pool size, minimum idle connections, connection timeout), and assigns them to HikariPool member variables.
  2. Connection Pool Initialization:

    • Calls initializeConnections() to create the initial set of database connections and populate the pool.
    • Starts background threads (e.g., HouseKeeper) to periodically check connection health, recycle idle connections, and perform other maintenance tasks.
  3. Performance Optimization:

    • Uses efficient data structures like ConcurrentBag to manage connections, ensuring high concurrency and low-latency operations for borrowing and returning connections.

3.2 Role of the this Parameter

The this parameter (typically HikariDataSource or HikariConfig) provides the configuration details, such as:

  • Database URL, username, and password
  • Maximum pool size (maximumPoolSize)
  • Minimum idle connections (minimumIdle)
  • Connection timeout (connectionTimeout)
  • Advanced settings (e.g., connection validation query, leak detection)

HikariPool uses these settings to determine how to initialize and manage connections.


4. Potential Role of fastPathPool

Although fastPathPool currently points to the same object as pool, its naming and design suggest performance optimization possibilities. Here are some speculations and insights:

  • Fast Path Optimization: HikariCP might intend to use a specialized pool implementation in certain scenarios, potentially skipping checks (e.g., connection validation) for better performance.
  • Dynamic Switching: The existence of fastPathPool allows HikariCP to dynamically switch to a more efficient pool implementation based on runtime conditions or configuration.
  • Backward Compatibility: By maintaining both pool and fastPathPool, HikariCP can introduce new pool implementations without breaking existing code.

While fastPathPool’s full potential is not yet utilized, its design leaves room for future enhancements.


5. Conclusion

The line pool = fastPathPool = new HikariPool(this); is a pivotal part of HikariCP’s connection pool initialization. It creates a HikariPool instance and assigns it to both pool and fastPathPool, setting up the core component for managing database connections. The HikariPool constructor handles configuration parsing, pool initialization, and background maintenance tasks.

This code reflects HikariCP’s key strengths:

  • High Performance: Efficient data structures and optimized logic ensure low latency and high throughput.
  • Flexibility: The fastPathPool design allows for future performance enhancements.
  • Simplicity: The initialization logic is clear and maintainable.

By analyzing this code, we gain insight into HikariCP’s connection pool creation process and appreciate its forward-thinking design. For those interested in diving deeper, exploring components like ConcurrentBag or HouseKeeper in the HikariCP source code can reveal even more about its robust implementation.

HikariCP case study 1 Thread Safety


HikariDataSource is the main entry point of HikariCP, a high-performance JDBC connection pooling library widely used in Java applications to manage database connections efficiently. This case study explores a critical aspect of its implementation: thread safety, focusing on how it ensures consistent behavior in high-concurrency environments.

Thread Safety in HikariDataSource

A key piece of code in HikariDataSource prevents the use of the connection pool after it has been closed:

if (isClosed()) {
   throw new SQLException("HikariDataSource " + this + " has been closed.");
}

This code checks whether the connection pool is closed. If isClosed() returns true, it throws an exception to prevent further operations. While this appears to be a simple check, it reveals important design considerations for thread safety.

The isClosed() Method

The isClosed() method is implemented as:

return isShutdown.get();

Here, isShutdown is a field defined as:

private final AtomicBoolean isShutdown = new AtomicBoolean();

The use of AtomicBoolean ensures that the isShutdown state is thread-safe, meaning its value remains consistent across multiple threads, even in high-concurrency scenarios. Java’s Atomic classes, such as AtomicBoolean, AtomicInteger, and AtomicLong, provide atomic operations that guarantee thread safety without explicit synchronization.

This design ensures that when the connection pool is closed, all threads can reliably detect this state, preventing race conditions or inconsistent behavior.
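
A small sketch of the idiom (the getAndSet gate is a common pattern for idempotent shutdown; treat it as an illustration, not a copy of HikariDataSource's close()):

import java.sql.SQLException;
import java.util.concurrent.atomic.AtomicBoolean;

public class ShutdownGateDemo {
   private final AtomicBoolean isShutdown = new AtomicBoolean();

   public boolean isClosed() {
      return isShutdown.get();
   }

   public void close() {
      // getAndSet is atomic: exactly one caller observes false and
      // performs the shutdown work, no matter how many threads race here.
      if (isShutdown.getAndSet(true)) {
         return; // already closed by another thread
      }
      System.out.println("shutting down the pool exactly once");
   }

   public void getConnection() throws SQLException {
      if (isClosed()) {
         throw new SQLException("DataSource has been closed.");
      }
      // ... hand out a connection ...
   }

   public static void main(String[] args) throws SQLException {
      ShutdownGateDemo pool = new ShutdownGateDemo();
      pool.getConnection(); // succeeds
      pool.close();
      pool.close();         // safe no-op on the second call
   }
}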

Why Thread Safety Matters

To understand why AtomicBoolean is necessary, we need to explore the root cause of thread safety issues.

Modern CPUs have multiple levels of caching: L1 and L2 caches, which are typically private to each CPU core; an L3 cache, usually shared among cores; and main memory, which is shared by everything. When a CPU core performs a computation, it loads data from main memory into its caches for faster access. However, this caching mechanism can lead to inconsistencies across cores.

For example, if one thread updates the isShutdown value on one CPU core, that update may remain in the core’s L1 cache and not immediately propagate to other cores. As a result, other threads running on different cores might read an outdated value of isShutdown, leading to thread-unsafe behavior.

How AtomicBoolean Ensures Thread Safety

AtomicBoolean addresses this issue through the use of a volatile field:

private volatile int value;

The value field stores the boolean state (0 for false, 1 for true). The volatile keyword plays a crucial role in ensuring thread safety by enforcing the following:

  1. Write Synchronization: When a thread modifies the value, the change is immediately written to main memory, bypassing the CPU cache.
  2. Read Synchronization: When a thread reads the value, it always fetches the latest value from main memory, not from the CPU cache.

This ensures that all threads see a consistent value for isShutdown, regardless of which CPU core they are running on.

The Trade-Off of volatile

While volatile guarantees thread safety, it comes with a performance cost. Reading from and writing to main memory is significantly slower than accessing CPU caches. Therefore, using volatile introduces latency, which can impact performance in high-throughput systems.

This trade-off highlights an important lesson: volatile should only be used when thread safety is critical. In cases where a state variable is rarely updated or does not require real-time consistency, a non-volatile field might suffice to avoid the performance overhead.

Lessons from HikariCP’s Source Code

HikariCP’s use of AtomicBoolean demonstrates a careful consideration of thread safety in a high-performance system. However, this is just one example of the library’s low-level optimizations. Other aspects of HikariCP’s design include:

  • Bytecode Size Control: HikariCP minimizes bytecode size to improve JVM optimization and reduce overhead.
  • Concurrency Patterns: HikariCP employs advanced concurrency techniques, similar to those found in frameworks like Disruptor, which is known for its CPU cache-aware design and exceptional performance.

These optimizations show how understanding low-level details, such as CPU caching and memory synchronization, can lead to more efficient code. For developers, studying frameworks like HikariCP and Disruptor offers valuable insights into writing high-performance applications.

Takeaways

Reading HikariCP’s source code can feel like a deep dive into computer science fundamentals, from CPU caches to JVM optimizations. It serves as a reminder that the abstractions we use in high-level programming are built on intricate low-level mechanisms. As developers, investing time in understanding these details can help us write better, more efficient code.

Reflecting on this, I can’t help but think: All those naps I took in university lectures on operating systems and computer architecture? It’s time to pay them back by diving into the source code!

By learning from frameworks like HikariCP, we can bridge the gap between high-level programming and low-level optimizations, ultimately becoming better engineers.

Updating Hexo and Icarus Theme to Latest Version


Recently, I decided to upgrade my blog system from Hexo 6.3.0 to the latest 7.3.0 version, along with updating the Icarus theme. In this article, I’ll share the entire update process, including the challenges encountered and their solutions.

Pre-Update Versions

  • Hexo Core Version: 6.3.0
  • Hexo CLI Version: 4.3.0
  • Icarus Theme Version: 5.1.0

Update Steps

1. Update Hexo CLI

First, we need to update Hexo CLI to the latest version:

npm install -g hexo-cli@latest

2. Update Hexo Core

Next, update the local Hexo core:

npm install hexo@latest --save

3. Fix Security Vulnerabilities

During the update process, several security vulnerabilities were detected and needed to be fixed:

npm audit fix
npm audit fix --force

4. Update Other Dependencies

Update other related dependencies:

npm update --save

5. Handle Styling Issues

We encountered a styling rendering issue related to bulma-stylus. Here’s how we resolved it:

  1. Remove bulma-stylus:

npm uninstall bulma-stylus

  2. Install bulma:

npm install bulma --save

6. Update Icarus Theme

Finally, update the Icarus theme to the latest version:

npm uninstall hexo-theme-icarus
npm install hexo-theme-icarus@latest --save

7. Regenerate the Site

After completing all updates, clean and regenerate the site:

hexo clean
hexo generate

Post-Update Versions

  • Hexo Core Version: 7.3.0
  • Hexo CLI Version: 4.3.2
  • Icarus Theme Version: Latest

Issues Encountered and Solutions

Issue 1: Style Rendering Error

During the update process, we encountered the following error:

ERROR Asset render failed: css/default.css
Error: Unexpected type inside the list.

This error was caused by version incompatibility between bulma-stylus and the new version of Hexo. We resolved it by removing bulma-stylus and installing bulma instead.

Issue 2: Security Vulnerabilities

During the update process, several security vulnerabilities were detected:

  • 2 Low-risk vulnerabilities
  • 8 Medium-risk vulnerabilities
  • 6 High-risk vulnerabilities
  • 2 Critical vulnerabilities

These were fixed by running npm audit fix and npm audit fix --force.

Conclusion

The update process went relatively smoothly overall. While we encountered some minor issues, they were all properly resolved. The updated blog system is now running more stably and has addressed known security vulnerabilities.

If you’re planning to update your Hexo blog, I recommend following these steps and ensuring you have backups of important data during the update process.


Understanding the JavaScript Event Loop

JavaScript is a single-threaded language, which means that it can only do one thing at a time. However, it is still able to handle multiple tasks at once through the use of the event loop.

The event loop is a mechanism that allows JavaScript to run asynchronous code while still processing other code. It works by constantly checking whether the call stack is empty: asynchronous operations hand their callbacks to the environment, and when they complete, the callbacks are queued and executed one by one once the stack is clear. A long-running synchronous function therefore blocks everything else, which is why long tasks should be broken up or made asynchronous.

Browser Important Concepts

The browser's runtime environment consists of more than just the JavaScript engine. JavaScript itself is single-threaded, but to let web pages listen for events, run timers, and call third-party APIs, the browser provides additional components:

  • Event Queue
  • Web API
  • Event Table
  • Event Loop

These components work together with the JavaScript engine to give the browser its rich functionality. In particular, the Event Queue, Web API, Event Table, and Event Loop cooperate to handle asynchronous operations and event handlers.

Event Queue
The Event Queue is a FIFO data structure that stores events waiting to be processed. When an event occurs, it is added to the event queue and waits for processing. The event queue can store various events, such as user operation responses, timer events, network requests, and more.

Web API
Web API is a set of APIs provided by the browser for handling asynchronous operations, such as network requests, timers, local storage, and more. Web APIs are usually implemented in native code provided by the browser and are separate from the JavaScript engine. This means that when we call a Web API, the JavaScript engine delegates the task to the Web API and returns immediately without waiting for the task to complete.

Event Table
The Event Table is a data structure that stores event handlers. When an event occurs, the browser looks up the event table to determine which event handlers should be executed. The event table is usually implemented in native code provided by the browser.

Event Loop
The event loop is an infinite loop that listens to the event queue and calls the corresponding event handler. When there are events in the event queue, the event loop retrieves them and calls the corresponding event handler. The main function of the event loop is to ensure that the JavaScript engine can keep running when handling asynchronous operations, without blocking other operations in the browser.

The Call Stack

The call stack is a data structure that keeps track of the functions that are currently being executed. Whenever a function is called, it is added to the top of the call stack. When the function completes, it is removed from the stack, and the next function in line is executed.

function multiply(a, b) {
  return a * b;
}

function add(a, b) {
  let result = a + b;
  result = multiply(result, result);
  return result;
}

console.log(add(2, 3)); // output: 25

In the code above, the add function calls the multiply function, which in turn returns a value that is used in the add function. The call stack keeps track of the order of execution and ensures that the code runs in the correct order.

Asynchronous Code

Asynchronous code is code that runs outside of the normal call stack. This can include things like user input, network requests, and timers. When an asynchronous operation completes, its callback is added to a separate queue known as the event queue.

console.log('Start');

setTimeout(() => {
  console.log('Timeout');
}, 0);

console.log('End');

In the code above, the setTimeout function is used to create a timer that will run after 0 milliseconds. Despite the short delay, the function is not executed immediately. Instead, it is added to the event queue and will be executed once the call stack is empty.

The Event Loop

The event loop is responsible for monitoring both the call stack and the event queue. When the call stack is empty, the event loop takes the first function in the event queue and adds it to the call stack. This function is then executed, and any resulting functions are added to the back of the event queue.

console.log('Start');

setTimeout(() => {
  console.log('Timeout');
}, 0);

Promise.resolve().then(() => {
  console.log('Promise');
});

console.log('End');

In the code above, a Promise is used to create another asynchronous task. Despite being created after the setTimeout function, the Promise is executed first because it is added to the microtask queue, which has a higher priority than the event queue.

Conclusion

The JavaScript event loop is a powerful mechanism that allows asynchronous code to be executed without blocking the main thread. By understanding how the call stack, event queue, and event loop work together, you can write more efficient and responsive code. Remember to use asynchronous code whenever possible, and always be mindful of how your code will affect the event loop.

Java Concurrent basic notes

In Java, “async” and “sync” refer to different ways of executing code and handling concurrency.

Synchronous code is executed in a single thread, with each statement being executed in sequence. When a statement is executed, the program waits for it to finish before moving on to the next statement. This can be useful when you need to ensure that certain code is executed in a specific order, but it can be inefficient if the code is doing something that takes a long time to complete, as the program will be blocked until the code finishes.

Asynchronous code, on the other hand, allows multiple tasks to be executed at the same time. Instead of waiting for a task to finish before moving on to the next one, asynchronous code can start a task and then move on to the next one, while the first task is still running in the background. This can be much more efficient, as the program can continue doing other things while waiting for long-running tasks to complete.

In Java, you can write asynchronous code using the CompletableFuture class, which provides a way to execute tasks in the background and then handle the results when they are ready. CompletableFuture allows you to chain together multiple tasks and specify how they should be executed, such as in sequence or in parallel.

To summarize, synchronous code executes one statement at a time in sequence, while asynchronous code allows multiple tasks to be executed in parallel, improving performance and efficiency.

CompletableFuture is a class introduced in Java 8 that provides a way to write asynchronous, non-blocking code. It is a powerful tool for handling complex asynchronous operations in a clear and concise manner.

CompletableFuture is a type of Future that represents a computation that may or may not have completed yet. It can be used to execute a task in the background and then handle the result when it becomes available, or to execute multiple tasks concurrently and then combine the results when they are all ready.

Here are some of the key features of CompletableFuture:

  1. Chaining: CompletableFuture allows you to chain together multiple asynchronous operations, so that one operation starts when the previous one finishes. This can be done using methods like thenApply(), thenCompose(), and thenCombine().

  2. Combining: CompletableFuture also allows you to combine multiple asynchronous operations into a single operation, using methods like allOf() and anyOf().

  3. Error handling: CompletableFuture provides methods for handling errors that may occur during the execution of an asynchronous operation, including exceptionally() and handle().

  4. Timeout handling: CompletableFuture allows you to set a timeout for an asynchronous operation, using methods like completeOnTimeout() and orTimeout().

  5. Asynchronous execution: CompletableFuture can execute tasks asynchronously on a separate thread, allowing the calling thread to continue with other tasks while the background task is executing.

  6. Completion stages: CompletableFuture provides a way to break down complex asynchronous operations into smaller, more manageable stages, using methods like thenApplyAsync(), thenComposeAsync(), and thenAcceptAsync().

Overall, CompletableFuture provides a flexible and powerful way to write non-blocking, asynchronous code in Java, making it easier to handle complex operations and improve performance.
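
A compact, runnable sketch of the first three features (all method names are from the standard java.util.concurrent.CompletableFuture API; the values are made up for illustration):

import java.util.concurrent.CompletableFuture;

public class CompletableFutureDemo {
   public static void main(String[] args) {
      CompletableFuture<Integer> price = CompletableFuture.supplyAsync(() -> 42);
      CompletableFuture<Integer> tax = CompletableFuture.supplyAsync(() -> 8);

      CompletableFuture<Integer> total = price
            .thenCombine(tax, Integer::sum)   // combining two independent tasks
            .thenApply(sum -> sum * 2)        // chaining a dependent step
            .exceptionally(ex -> -1);         // error-handling fallback

      // join() blocks the main thread only for this demonstration
      System.out.println(total.join()); // prints 100
   }
}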