Replace timer-based TCSleep (which allocated a KTIMER and waited on it) with an
implementation that calls KeDelayExecutionThread. This removes dynamic allocation
and kernel timer usage to simplify the code and reduce resource overhead.
Add an IRQL <= APC_LEVEL assertion and document the requirement. This is safe
because TCSleep is always called from code that runs at PASSIVE_LEVEL.
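
A minimal sketch of the new approach, assuming TCSleep keeps its millisecond delay parameter as in the old timer-based version:

    #include <ntddk.h>

    // Sketch only: the exact TCSleep signature is assumed here.
    VOID TCSleep (ULONG milliSeconds)
    {
        LARGE_INTEGER interval;

        // KeDelayExecutionThread requires IRQL <= APC_LEVEL.
        ASSERT (KeGetCurrentIrql () <= APC_LEVEL);

        // Negative value = relative delay, in 100-nanosecond units.
        interval.QuadPart = -((LONGLONG) milliSeconds * 10000);

        KeDelayExecutionThread (KernelMode, FALSE, &interval);
    }
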
Added an InterlockedDecrement in the error path when GetPoolBuffer fails to allocate an EncryptedIoRequest, keeping the pending IO request count accurate and preventing potential resource leaks.
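
A simplified sketch of the accounting pattern behind this fix; a plain pool allocation stands in for the driver's GetPoolBuffer helper, and the queue type and counter name are hypothetical:

    #include <ntddk.h>

    // Hypothetical, minimal stand-in for the driver's queue type; only the
    // increment/decrement accounting pattern is the point here.
    typedef struct _DEMO_QUEUE
    {
        volatile LONG PendingRequestCount;
    } DEMO_QUEUE;

    static NTSTATUS QueueEncryptedIoRequest (DEMO_QUEUE *queue, SIZE_T requestSize)
    {
        void *request;

        // Count the request as pending before its tracking buffer is allocated.
        InterlockedIncrement (&queue->PendingRequestCount);

        request = ExAllocatePoolWithTag (NonPagedPoolNx, requestSize, 'qdVT');
        if (!request)
        {
            // The fix: balance the increment so pending-IO accounting stays
            // accurate and shutdown can still drain the count to zero.
            InterlockedDecrement (&queue->PendingRequestCount);
            return STATUS_INSUFFICIENT_RESOURCES;
        }

        // ... hand the request off to the IO thread, which decrements the
        // pending count and frees the buffer when the request completes ...
        return STATUS_SUCCESS;
    }
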
Major changes:
- Added a pooled + elastic work item model with retry/backoff (MAX_WI_RETRIES); removed semaphore usage.
- Introduced two completion threads to reduce contention and latency under heavy IO.
- Added BytesCompleted (per IRP) and ActualBytes (per fragment) for correct short read/write accounting; total read/write stats now reflect the bytes actually transferred instead of the requested length.
- Moved decryption of read fragments into the IO thread; completion threads now only finalize IRPs (reduces the race window and simplifies the flow).
- Deferred final IRP completion via FinalizeOriginalIrp to avoid inline IoCompleteRequest re-entrancy; added a safe inline fallback for out-of-memory conditions.
- Implemented work item pool drain & orderly shutdown (ActiveWorkItems + NoActiveWorkItemsEvent) with a robust stop protocol.
- Replaced semaphore-based work item acquisition with a spin lock + free list + event (WorkItemAvailableEvent); added exponential backoff for transient exhaustion (see the work item pool sketch after this list).
- Added elastic (on-demand) work item allocation with pool vs dynamic origin tracking (FromPool).
- Added FreeCompletionWorkItemPool() for symmetric cleanup; ensured all threads are explicitly awakened during stop.
- Added a second completion thread, replacing the single CompletionThread.
- Hardened UpdateBuffer: fixed a parameter name typo, added bounds/overflow checks using IntSafe (ULongLongAdd), validated Count, and guarded the sector end computation (see the overflow-guard sketch after this list).
- Fixed GPT/system region write protection logic to pass the correct length instead of the end offset.
- Ensured ASSERTs use fragment-relative bounds (cast + length) and avoided mixed 64-bit/32-bit comparisons.
- Added the MAX_WI_RETRIES constant and a WiRetryCount field in EncryptedIoRequest.
- Ensured RemoveLock is released only after all queue/accounting updates (OnItemCompleted).
- Reset/read-ahead logic preserved; the read-ahead trigger is now based on actual completion and a zero pending fragment count.
- General refactoring, clearer separation of concerns (TryAcquireCompletionWorkItem / FinalizeOriginalIrp / HandleCompleteOriginalIrp).
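
A hedged sketch of the pooled + elastic work item path described above. Only MAX_WI_RETRIES, FromPool, WorkItemAvailableEvent, ActiveWorkItems, NoActiveWorkItemsEvent and TryAcquireCompletionWorkItem come from the list; the structure layouts, the retry cap value, the backoff timing and the release helper are assumptions:

    #include <ntddk.h>

    #define MAX_WI_RETRIES 5   // retry cap; the actual value is assumed

    // Simplified work item; FromPool marks pooled vs elastically allocated items
    // (preallocated pool items have it set to TRUE at pool initialization).
    typedef struct _COMPLETION_WORK_ITEM
    {
        LIST_ENTRY ListEntry;
        BOOLEAN FromPool;
        PIRP Irp;
    } COMPLETION_WORK_ITEM;

    typedef struct _WORK_ITEM_POOL
    {
        KSPIN_LOCK Lock;
        LIST_ENTRY FreeList;
        KEVENT WorkItemAvailableEvent;   // SynchronizationEvent, set when an item is returned
        volatile LONG ActiveWorkItems;   // outstanding pooled + elastic items
        KEVENT NoActiveWorkItemsEvent;   // NotificationEvent, set when the count drains to zero
    } WORK_ITEM_POOL;

    // Try the free list first; on transient exhaustion, back off exponentially and
    // retry up to MAX_WI_RETRIES times, then fall back to an elastic allocation.
    // Assumes the caller runs at PASSIVE_LEVEL, since it may wait.
    static COMPLETION_WORK_ITEM *TryAcquireCompletionWorkItem (WORK_ITEM_POOL *pool)
    {
        COMPLETION_WORK_ITEM *item = NULL;
        KIRQL oldIrql;
        LONG retry;

        for (retry = 0; retry <= MAX_WI_RETRIES && !item; ++retry)
        {
            KeAcquireSpinLock (&pool->Lock, &oldIrql);
            if (!IsListEmpty (&pool->FreeList))
                item = CONTAINING_RECORD (RemoveHeadList (&pool->FreeList), COMPLETION_WORK_ITEM, ListEntry);
            KeReleaseSpinLock (&pool->Lock, oldIrql);

            if (!item)
            {
                // Exponential backoff: wait 1, 2, 4, ... ms for an item to be returned.
                LARGE_INTEGER timeout;
                timeout.QuadPart = -(((LONGLONG) 1 << retry) * 10000);
                KeWaitForSingleObject (&pool->WorkItemAvailableEvent, Executive, KernelMode, FALSE, &timeout);
            }
        }

        if (!item)
        {
            // Elastic fallback: allocate a one-off item and record its origin.
            item = ExAllocatePoolWithTag (NonPagedPoolNx, sizeof (*item), 'iwVT');
            if (!item)
                return NULL;
            RtlZeroMemory (item, sizeof (*item));
            item->FromPool = FALSE;
        }

        InterlockedIncrement (&pool->ActiveWorkItems);
        return item;
    }

    // Return an item: pooled items go back on the free list, elastic ones are freed.
    // When the last active item is released, signal the drain event; the stop path
    // blocks new acquisitions first and then waits on NoActiveWorkItemsEvent.
    static VOID ReleaseCompletionWorkItem (WORK_ITEM_POOL *pool, COMPLETION_WORK_ITEM *item)
    {
        KIRQL oldIrql;

        if (item->FromPool)
        {
            KeAcquireSpinLock (&pool->Lock, &oldIrql);
            InsertTailList (&pool->FreeList, &item->ListEntry);
            KeReleaseSpinLock (&pool->Lock, oldIrql);
            KeSetEvent (&pool->WorkItemAvailableEvent, IO_NO_INCREMENT, FALSE);
        }
        else
        {
            ExFreePoolWithTag (item, 'iwVT');
        }

        if (InterlockedDecrement (&pool->ActiveWorkItems) == 0)
            KeSetEvent (&pool->NoActiveWorkItemsEvent, IO_NO_INCREMENT, FALSE);
    }
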
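And a small sketch of the guarded sector-end computation mentioned for the UpdateBuffer hardening, using RtlULongLongAdd from ntintsafe.h (the helper, its parameters and the exact checks are illustrative assumptions):

    #include <ntddk.h>
    #include <ntintsafe.h>

    // Compute end = startSector + sectorCount without silent wraparound, then
    // bounds-check the result against the region being updated.
    static NTSTATUS ComputeSectorEnd (ULONGLONG startSector, ULONG sectorCount,
                                      ULONGLONG regionSectorCount, ULONGLONG *endSector)
    {
        ULONGLONG end;
        NTSTATUS status;

        // Validate the count up front.
        if (sectorCount == 0)
            return STATUS_INVALID_PARAMETER;

        // Fail cleanly on 64-bit overflow instead of wrapping around and then
        // sneaking past later bounds checks with a small bogus value.
        status = RtlULongLongAdd (startSector, sectorCount, &end);
        if (!NT_SUCCESS (status))
            return status;

        if (end > regionSectorCount)
            return STATUS_INVALID_PARAMETER;

        *endSector = end;
        return STATUS_SUCCESS;
    }
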
Safety / correctness improvements:
- Accurate short read handling (STATUS_END_OF_FILE with true byte count).
- Eliminated risk of double free or premature RemoveLock release on completion paths.
- Prevented potential overflow in sector end arithmetic.
- Reduced contention and potential deadlock scenarios present with the previous semaphore wait path.
- Document HIGH_LEVEL constraints and rationale for pre-building a nonpaged scratch MDL.
- Allocate contiguous scratch buffer with conservative PFN cap (0x7FFFFFFFFFF) and fall back to unlimited cap if needed.
- Replace ASSERT with TC_BUG_CHECK for validation of write MDL mapping at HIGH_LEVEL.
- Safely copy PFNs from the prebuilt MDL into the caller MDL: compute dst/src page counts, check capacity, copy the exact PFNs, and retarget the MDL header fields while preserving MdlFlags (see the sketch after this list).
- Make DumpData cleanup defensive in unload path.
- Comment improvements for clarity and maintainability.
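
A rough sketch of the PFN copy and MDL retarget step; the helper name and parameters are illustrative, and it assumes the scratch MDL was prebuilt at lower IRQL over a page-aligned nonpaged buffer, since nothing may be allocated or mapped at HIGH_LEVEL:

    #include <ntddk.h>

    // Point the caller's write MDL at the prebuilt nonpaged scratch buffer by
    // copying PFNs and retargeting the MDL header, leaving MdlFlags untouched.
    static BOOLEAN RetargetMdlToScratch (PMDL callerMdl, PMDL scratchMdl, ULONG bytesNeeded)
    {
        PPFN_NUMBER dstPfns = MmGetMdlPfnArray (callerMdl);
        PPFN_NUMBER srcPfns = MmGetMdlPfnArray (scratchMdl);

        // Page counts spanned by each MDL's described buffer.
        ULONG dstPageCount = ADDRESS_AND_SIZE_TO_SPAN_PAGES (MmGetMdlVirtualAddress (callerMdl), MmGetMdlByteCount (callerMdl));
        ULONG srcPageCount = ADDRESS_AND_SIZE_TO_SPAN_PAGES (MmGetMdlVirtualAddress (scratchMdl), MmGetMdlByteCount (scratchMdl));
        ULONG neededPages = ADDRESS_AND_SIZE_TO_SPAN_PAGES (NULL, bytesNeeded);

        // Capacity check: both PFN arrays must cover the requested span.
        if (neededPages > dstPageCount || neededPages > srcPageCount)
            return FALSE;

        // Copy the exact PFNs of the scratch buffer into the caller MDL.
        RtlCopyMemory (dstPfns, srcPfns, neededPages * sizeof (PFN_NUMBER));

        // Retarget the header at the scratch buffer's virtual range; MdlFlags
        // are deliberately preserved.
        callerMdl->StartVa = scratchMdl->StartVa;
        callerMdl->ByteOffset = scratchMdl->ByteOffset;
        callerMdl->ByteCount = bytesNeeded;

        return TRUE;
    }
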
* Prefer allocations to be non-executable
* Remove and reimplement DDIs inappropriately called inside HIGH_LEVEL IRQL routines
* Refactor the hibernate context to be carried via the passed-in FILTER_EXTENSION pointer rather than a global
The column width was updated before SlotListCtrl had its slots added, which
left the column width incorrect until the first OnTimer run updated it.
Changing the order ensures the column width is correct on program launch.
Also ensure that we do not autosize the column to fit empty content.