Monday, January 15, 2024

Windows: no easy way to view a list of arbitrary file paths

problem: You have a list of arbitrary file paths and no easy way to view them

Let's say you're doing a file search on a cifs/smb share to find files with certain attributes or naming patterns.

You'd like to be able to view or preview the file paths efficiently one by one, but the file paths are arbitrary. This means:

  1. The files are not all in the same directory.
  2. Some files may be in the same directory AND we want to ignore other files in such a case.

impact: this makes viewing the files difficult

Windows does not have native functionality to support viewing such a list of files.
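
For context, here is a hedged sketch of how such a list of paths might be produced from a Cygwin prompt (the share path and naming pattern are hypothetical):

# produce a list of candidate file paths, one per line (hypothetical share path and pattern)
find //server/share/media -type f -iname '*report*' > list_of_file_paths.txt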

solution: make a loop with cygstart

Cygwin can help here with its cygstart command.

# define pause function
function pause() { read -n1 -p"$@" </dev/tty; }
# export pause function
export -f pause

while IFS= read -r line; do printf "%q\n" "$line"; done < ../path/to/list_of_file_paths.txt |
  tr \\n \\0 |
  xargs -I{} -0 -n1 -- sh -c 'echo "$1"; pause "press any key to cygstart the file, or CTRL+C to abort"; cygstart "$1"' cygstart_loop {}

# 💡 the .txt file is expected to contain file paths one per line; if a file path contains a line break, this logic needs to be updated to handle that scenario.

# 👆 breakdown of the above loop
# 1. read the input .txt file line by line - expects file paths one per line

# 2. print each line with the %q format, which makes the output safe/escaped for shell input

# 3. translate newlines to zero/null bytes - assumes that no filename contains a newline
#    note: this may not be strictly necessary because we have used the %q format, BUT it does
#    explicitly document that we are working with records separated by newlines, converting
#    the newlines to zero/null bytes, and that xargs is running in -0 mode.
#    xargs -0 removes any ambiguity in the record separator and mitigates interpretation
#    issues caused by special characters in file paths.

# 4. use xargs to run a simple sh script that prompts the user whether they would like to cygstart the given file.
#    the user is shown the interpreted file path and has the chance to abort.
#    if the user does not abort, cygstart will attempt to open the file path with the default program for that file type.
#    note: this method should mitigate command injection exploits, see: https://unix.stackexchange.com/a/156010/19406
#    note: cygstart_loop is the name given to the ad-hoc sh script
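
If xargs feels like overkill, a minimal alternative sketch in plain bash is shown below. It assumes bash (for read -u), that no path in the list contains a newline, and it skips the %q escaping entirely because each path is passed to cygstart directly as an argument rather than re-parsed by a shell:

# read the list on fd 3 so the interactive prompt can read the keyboard via /dev/tty
while IFS= read -r -u3 path; do
  printf '%s\n' "$path"
  read -n1 -p 'press any key to cygstart the file, or CTRL+C to abort ' </dev/tty
  cygstart "$path"
done 3< ../path/to/list_of_file_paths.txt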

citation:

Props to:
Stéphane Chazelas @ Stack Exchange / Unix & Linux, for their detailed answer on using shell input safely.

Thursday, September 7, 2023

Secure defaults for sshd_config including Multi-Factor-Authentication (MFA)

I posted a gist documenting secure defaults for sshd_config, including Multi-Factor-Authentication (MFA). The configuration strategy aims to mitigate various attacks and exploits, disables password authentication, and requires users to use MFA.

You can find the gist here: https://gist.github.com/kyle0r/eb6b9e16ad6366ffa9692169906f128a
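
To give a hedged flavour of the direction (this is NOT the contents of the gist), the strategy likely revolves around sshd_config directives along these lines, plus validating before reloading:

# illustrative directives only - see the gist for the real, complete configuration:
#   PasswordAuthentication no
#   KbdInteractiveAuthentication yes
#   AuthenticationMethods publickey,keyboard-interactive
# validate the config, then reload (assumes root/sudo; the service may be named ssh or sshd):
sshd -t && systemctl reload ssh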

Sunday, June 18, 2023

Minecraft Windows 10 edition (app) - edit player name for offline multiplayer

problem: you wish to change your local multiplayer player name

In the Windows 10 edition (not Java), there doesn't seem to be an obvious, straightforward way to change the player name for offline LAN multiplayer.

impact: you cannot change your player name

solution: edit options.txt file

When you follow the steps below, a .txt file will open and you can edit the mp_playername option to set your desired player name for local LAN multiplayer games. A scripted alternative is sketched after the steps.

Tested on Minecraft version 1.19.81.

The steps are as follows:

  1. Close Minecraft if open.
  2. Select this path to the options.txt file and copy it (CTRL+C):
    "%LocalAppData%\Packages\Microsoft.MinecraftUWP_8wekyb3d8bbwe\LocalState\games\com.mojang\minecraftpe\options.txt" 
  3. WIN+R (open the run prompt)
  4. CTRL+V (paste the path)
  5. ENTER (your default editor for .txt files should open e.g. Notepad)
  6. Edit the mp_playername option (white-space and certain characters are likely restricted)
  7. CTRL+S (save the file)
  8. Load Minecraft to see the change.
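
For the command-line inclined, the same edit can be scripted from a Cygwin prompt - a hedged sketch is below. It assumes the LOCALAPPDATA variable is visible to Cygwin, that options.txt uses the key:value format, and NewName is a placeholder:

opts="$(cygpath "$LOCALAPPDATA")/Packages/Microsoft.MinecraftUWP_8wekyb3d8bbwe/LocalState/games/com.mojang/minecraftpe/options.txt"
# back up the file first, then set the player name (NewName is a placeholder)
cp "$opts" "$opts.bak"
sed -i 's/^mp_playername:.*/mp_playername:NewName/' "$opts"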

citation:

I didn't find a direct source for this, but some things I read while researching solutions gave me the idea to search within configuration files for a likely setting. I found options.txt and the change worked.

Wednesday, February 8, 2023

Archiving Smarter Every Day episodes

I wrote up my steps for grabbing online media content (audio and video) from content platforms such as YouTube using the yt-dlp utility. I took the Smarter Every Day channel as an example of important intellectual content and recorded some related tutorials.

You can find it hosted on Coda here: handy-to-know-shizzle/archiving-smarter-every-day-episodes.
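
As a hedged flavour of the approach (the URL and output template here are illustrative, not the exact commands from the write-up):

# keep a download archive so re-runs only fetch new episodes; URL and output template are illustrative
yt-dlp --download-archive archive.txt \
  -o '%(upload_date)s - %(title)s [%(id)s].%(ext)s' \
  'https://www.youtube.com/@smartereveryday'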

Wednesday, February 1, 2023

Bracketed paste - prevents pasting commands into vim

Problem: Why can't I paste commands into vim?

I've had this problem in at least two environments I work in. It came up again just recently, so I'm taking a moment to document it. Let us say you have the following in your clipboard:

:set tabstop=4 shiftwidth=4 expandtab

So you are inside vim and you press your usual paste keystroke for your terminal, e.g. SHIFT + INSERT. You expect the command to appear in the vim command line area, but instead something else happens and perhaps some of the clipboard content is pasted into the buffer.

😡🤬

Impact: frustration and lost productivity...

Everyone hates to lose their flow state because of annoying issues like this. At least from my 20+ years of experience with Linux, it's a non-standard behaviour (or perhaps a change in the old/legacy behaviour).

Solution: 

It is possible that this issue only affects xterm-like terminals. I use mintty heavily in my daily workflows.

This post on the Stack Exchange vim site captures the problem / solution. It's straightforward - at runtime and/or in your ~/.vimrc use the following:
" disable bracketed-paste - which prevents pasting commands into vim
set t_BE=

It helps to understand bracketed-paste: https://en.wikipedia.org/wiki/Bracketed-paste. In addition, it helps to read the relevant sections of the vim manual on bracketed-paste.

Citation:

Props to: the people on the Stack Exchange post.

Monday, January 30, 2023

Windows smb share file permissions cache / race condition issue

Problem: windows client cannot access a file on an smb share but the permissions are correct on the server

Client: Windows 10 22H2 (OS Build 19045.2006) smb dialect 3.1.1
Server: Linux - Debian 10 - buster - smbd version 4.9.5

I had an issue where a specific file on an smb share, accessed via a Windows client, had permissions inexplicably out of sync with the server permissions and ACLs. The file could be listed but read/write permission was denied. Using cygwin to list the file permissions showed a disparity between the client and the server.

The file had been written by a Linux client and the Windows client had inexplicable permissions issues. Explorer and other programs demonstrated the permissions issues. Here is how it looked from a cygwin prompt on the Windows client:

user@node-5900x //omv.blah.local/share
$ file merge.mp4 ; touch merge.mp4
merge.mp4: regular file, no read permission
touch: cannot touch 'merge.mp4': Permission denied

Cross-checking the permissions and ACLs on the server and another Linux client - everything seemed fine. Explicitly touching, chown'ing and chmod'ing the file didn't help the Windows client wake up and see the correct permissions. Restarting smbd on the server also didn't seem to help.

Creating more new files on the Linux client and checking them on the Windows client - everything was OK... It was this specific file that was having issues.

Not sure if it's related, but the command that created the file (the writing binary) on the Linux node was as follows:

ffmpeg -i merge.mkv -strict experimental -c copy merge.mp4

Impact: client unable to work with the file

You could say this was a kind of service outage for the client. This would obviously impact the productivity of the person(s) working on the client.

Solution: restart the workstation service on client

The smb connection was not listed with net use, so net use was not the right approach to delete the session/connection for the share in this case. I found a few posts suggesting that a restart of the client's Workstation service would clear out sessions/credentials and could solve such issues - it did. I have a feeling I've used this approach in the past - it was just too long ago to remember it.

Prior to restarting the service the smb connections list looked like this via elevated PowerShell:

PS C:\WINDOWS\system32> Get-SmbConnection

ServerName ShareName UserName Credential Dialect NumOpens
---------- --------- -------- ---------- ------- --------
omv.blah.local share NODE-5900X\user NODE-5900X\user 3.1.1 10

To restart the workstation service - from an elevated cmd prompt:

net stop workstation && net start workstation

💡 It's important to ensure all Explorer windows and other programs using the share are closed, otherwise this solution might not work as advertised.

Alternative: logging off the Windows user and/or restarting the Windows node would likely have also resolved this issue. However, those approaches are disruptive and sometimes highly undesirable because they can impact people's workflow and productivity.

Thursday, January 26, 2023

awk - multiple iterations on a single file with different logic (FS) per iteration

Holy smokin’ Toledo’s - what a blast from the past! I was doing some blog spring cleaning and found this unpublished draft from 2012-May-02. Here is the unedited awk paste (scroll down for a commented version):

awk -v loops=2 '
BEGIN {
f = ARGV[ARGC-1]
"wc -l "f"|egrep -o [0-9]+" | getline NL;
while (++i < loops) {
ARGV[ARGC++] = f
}
}
FNR == 1 {
iteration++; print "iteration: "iteration
}
FNR == NL {
FS = "[0-9][0-9]:[0-9][0-9] "
}
iteration == 1 { print $1 }
iteration == 2 { print $NF }

' $media_files_file

In 2012 I must have been happy with my handiwork - because I had created a blog draft either as a scratch pad or because I wanted to share it...😊

So I searched my script library for the keyword media_files_file and got a hit on a script named update-dm-to-dc.sh, which was created to batch-update the date-modified timestamp of files to match the date-created timestamp.
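
The original script isn't reproduced here, but a present-day sketch of that idea might look like the following (assumes GNU stat/touch, a filesystem that exposes birth/creation time, and that $media_files_file holds one path per line):

# sketch only, not the original script: set each file's modified time to its birth (creation) time
while IFS= read -r f; do
  crtime=$(stat -c '%W' "$f")  # birth time as seconds since the Epoch, 0 if unknown
  [ "$crtime" -gt 0 ] && touch -m -d "@$crtime" "$f"
done < "$media_files_file"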

This was also back in a time when I was probably storing media files on NTFS which stores creation timestamps for its files.

I'm guessing that I probably did something to change the modified timestamp on a bunch of media files and wanted to revert that change. Back in 2012, my storage did not support CoW or a file system that supported snapshots - so there was no easy way to roll back if the mass modification of files went wrong - backups would have been the main undo workflow.

Perhaps iTunes or a similar media library had updated file modified timestamps in an undesirable way, e.g. causing differential backup scripts to see the media files as modified and select them during the next backup?

Maybe I was migrating files to a filesystem that didn't support creation timestamps (like XFS v4 or ext3) and wanted to set the modified stamp to the source filesystem's creation stamp?

<rabbit-hole>


Q: Did XFS support creation time (crtime) in 2012?
A: No - in 2012 the latest XFS release was v4 - the crtime code (XFS inode v3) was first committed on 2013-Apr-21 by Christoph Hellwig. Here is the commit.
An XFS status update in 2013-May mentions the release of Linux 3.10 with an "experimental XFS feature - CRC protection for on-disk metadata structures", which AFAIK was part of the inode v3 code.
In 2015-May xfsprogs-3.2.3-rc1 was released, which mentioned properly supporting the inode v3 format.
The XFS docs were updated in 2016-Jan with details of crtime and XFS v5 fields. Note the differentiation between XFS version and XFS inode version.

Useful related links:
#1 How to find creation date of file?
#2 What file systems on Linux store the creation time? - XFS v5 supports crtime

# on XFS v5
# xfs_db -r -c "inode $(stat -c '%i' yourfile.txt)" -c "print v3.crtime.sec" /dev/disk/by-uuid/1d5722e2-a5ac-XXXX-XXXX-392290480c23
v3.crtime.sec = Sun Aug 22 14:58:40 2021 

It is noted that stat -c '%w' or '%W' should display the file creation time on filesystems that store it. On Linux this requires coreutils 8.31 (released 2019-Mar), glibc 2.28 and kernel version 4.11 or newer.
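
For example (assuming a filesystem, kernel and coreutils new enough to expose crtime):

stat -c '%w' yourfile.txt   # birth time, human readable ('-' if unknown)
stat -c '%W' yourfile.txt   # birth time as seconds since the Epoch (0 if unknown)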

</rabbit-hole>

I've gone through the awk script and added comments - it took me a moment to figure out what I was trying to do back then... Unfortunately I don't have bash history going back to that exact point in time to know the exact contents or line format of $media_files_file.

What I was able to establish is that I probably ran this ad hoc awk script the day before I created the update-dm-to-dc.sh script (the research and debugging phase). So the line format of $media_files_file was probably an earlier iteration of the formats used in the update-dm-to-dc.sh script - the record formats in the script don't match the pattern: FS = "[0-9][0-9]:[0-9][0-9] "

My summary of the ad-hoc awk script: I was trying to run awk once but read the records (lines) more than once and do something different with the records based on the iteration.

For each record (line in this case - default RS) in the $media_files_file the ad-hoc awk would have:

  1. Once at the start of each iteration - printed the iteration number.
  2. For the #1 iteration - for each record - printed the first field (default FS).
  3. For the #2 iteration - for each record - printed the last field ($NF, with the custom FS).

My hypothesis for the goal of the ad-hoc awk: it was to help validate records (lines) and fields - to ensure that the fields were normalised and predictable for script input, ahead of doing a mass update/change of timestamps of my media library. So this awk was part of the manual assertions to check that the planned script input would work in a predictable way. With comments:

awk -v loops=2 '
# this block is run once at the start of awk invocation
BEGIN {
# store the last command line argument in var f - the file to process in this case
f = ARGV[ARGC-1]

# store the line count of f var in NL var (number of lines in the file to be processed).
# awk does not have a built-in variable for this.
"wc -l "f"|egrep -o [0-9]+" | getline NL;

# duplicate the command line argument to satisfy the number of specified loops.
# this has the effect of telling awk to run more than once on the input file stored in f var.
while (++i < loops) {
ARGV[ARGC++] = f
}
}

# run the following code for each iteration

# this block is executed at the start of each iteration in f var (first line of the file)
FNR == 1 {
# increment and print the iteration counter
iteration++;
print "iteration: "iteration
}

# this block is executed at the end of an iteration in f var (last line of the file)
FNR == NL {
# modify the awk FS (Field Separator)
FS = "[0-9][0-9]:[0-9][0-9] "
}

# this block is executed only during iteration 1
iteration == 1 {
# print the first field in the current input record
print $1
}

# this block is executed only during iteration 2
iteration == 2 {
# print the last field of the current input record ($NF, where NF is the Number of Fields)
print $NF
}

' $media_files_file

Overall thoughts

The script is pretty nifty - in retrospect there may have been better ways to achieve the same results, but I like how it explores the possibilities of awk, demonstrating a practical modification of ARGV and how to run different logic on the same input records. Nice work 2012 me!