sahinakkaya.dev/assets/js/lunr/lunr-store.js
2023-01-17 22:55:12 +00:00

68 lines
42 KiB
JavaScript
Raw Blame History

This file contains ambiguous Unicode characters

This file contains Unicode characters that might be confused with other characters. If you think that this is intentional, you can safely ignore this warning. Use the Escape button to reveal them.

var store = [{
"title": "First blog post",
"excerpt":" Hello, World!* So here I am and welcome to my first blog. Having a personal space on the Internet has been a dream for me for years and I am happy that it finally have come true. You might think that I could sign-up for a social media platform and my profile would be a personal space for me but no. I just dont feel comfortable with that way. This has been the case since my childhood and also the reason why I dont use Facebook, Instagram or any other social media. If you think you found me on these platforms, I would say it is not me. I might write another post about why I dont like social media but I will cut this one here. Why I wanted to start blogging? There are several reasons for starting my own site and blogging, but I can list the most important ones as follows: Giving back to community I use the software developed and brought by the community every day. The moment I power on my computer I start using Free Software. It really amazes me to see the work produced by people who do not know each other at all. For example, I did not even write a single line of code for this site. If Free Software didnt exist, Id either have to spend money and use a platform that I have limited control over, or waste my time and build a site with a possibly worse design than this one*. In return for this, I want to give back to the community. For me, the way to give back to the community so far has been to share the projects Ive done and archive the things I learn every day in a repository called TIL*. But some of the tils Ive written recently are getting lengthy and I think they deserve their own posts. So instead of writing long tils, I will blog what I learned here. Archiving the memories I like to go over what I have done in the past once in a while. Blogging is perfect way to do this. I still read my diaries that I wrote in the past and they are fun. But I promise I will keep these posts more formal than my diaries*. Pushing myself to do something useful At the end of every year, I sit on my desk and think about what I did in that year. I generally dont like the result because I fail to keep some of my resolutions for that year. Setting up a personal website was one of my resolutions for 2021 and it looks like I manage to keep it**. Unfortunately, I cant always keep my spirits up. Sometimes I just do nothing and all the time passes. Hopefully, the feeling that I have to write something will help me get out of bad mood at such times. Improving my writing skills Last but not least, I want to improve my writing. Even though I dont use a formal language while writing here, I think it will help me improve my writing skills. Final words While writing this post I already come up with some new topics to write but I think they need their own posts. Subscribe to my RSS Feed to not miss them. You know RSS, right? I recently started using it and it is the best way to consume content. Do yourself a favor and search it if you dont know. I will probably write something about it in the following blog posts. Thats all from me and thank you for reading. See you next time! ","categories": [],
"tags": [],
"url": "/2021/12/24/first-blog-post.html",
"teaser": null
},{
"title": "Stop cat-pipe'ing, You Are Doing It Wrong!",
"excerpt":"cat some_file | grep some_pattern Im sure that you run a command something like above at least once if you are using terminal. You know how cat and grep works and you also know what pipe (|) does. So you naturally combine all of these to make the job done. I was also doing it this way. What I didnt know is that grep already accepts file as an argument. So the above command could be rewritten as: grep some_pattern some_file … which can make you save a few keystrokes and a few nanoseconds of CPU cycles. Phew! Not a big deal if you are not working files that contains GBs of data, right? I agree but you should still use the latter command because it will help you solve some other problems better. Here is a real life scenario: You want to search for some specific pattern in all the files in a directory. If you use the first approach, you may end up running commands like this: ls  config.lua  Git.lua  init.lua  markdown.lua  palette.lua  util.lua  diff.lua  highlights.lua  LSP.lua  Notify.lua  Treesitter.lua  Whichkey.lua cat config.lua | grep light cat diff.lua | grep light cat Git.lua | grep light cat highlights.lua | grep light Pmenu = { fg = C.light_gray, bg = C.popup_back }, CursorLineNr = { fg = C.light_gray, style = \"bold\" }, Search = { fg = C.light_gray, bg = C.search_blue }, IncSearch = { fg = C.light_gray, bg = C.search_blue }, cat init.lua | grep light local highlights = require \"onedarker.highlights\" highlights, # You still have a lot to do :/ If you use the second approach, you will immediately realize that you can send all the files with * operator and you will finish the job with just one command (2 if you include mandatory ls :D): ls  config.lua  Git.lua  init.lua  markdown.lua  palette.lua  util.lua  diff.lua  highlights.lua  LSP.lua  Notify.lua  Treesitter.lua  Whichkey.lua grep light * highlights.lua: Pmenu = { fg = C.light_gray, bg = C.popup_back }, highlights.lua: CursorLineNr = { fg = C.light_gray, style = \"bold\" }, highlights.lua: Search = { fg = C.light_gray, bg = C.search_blue }, highlights.lua: IncSearch = { fg = C.light_gray, bg = C.search_blue }, init.lua:local highlights = require \"onedarker.highlights\" init.lua: highlights, LSP.lua: NvimTreeNormal = { fg = C.light_gray, bg = C.alt_bg }, LSP.lua: LirFloatNormal = { fg = C.light_gray, bg = C.alt_bg }, markdown.lua: markdownIdDelimiter = { fg = C.light_gray }, markdown.lua: markdownLinkDelimiter = { fg = C.light_gray }, palette.lua: light_gray = \"#abb2bf\", palette.lua: light_red = \"#be5046\", util.lua:local function highlight(group, properties) util.lua: \"highlight\", util.lua: highlight(group, properties) Isnt this neat? You might say that “This is cheating! You are using a wild card, of course it will be easier.” Well, yes. Technically I could use the same wild card in the first command like cat * | grep light but: I figured that out only after using wild card in the second command. So I think it is does not feel natural. It is still not giving the same output. Try and see the difference! * ","categories": [],
"tags": ["cat","grep","linux","command-line"],
"url": "/2022/01/01/stop-cat-pipeing.html",
"teaser": null
},{
"title": "Automatically Build and Deploy Your Site using GitHub Actions and Webhooks",
"excerpt":"In this post I will explain how you can use GitHub to automate the build and deployment processes that you have. I am going to automate the deployment of this site but you can do whatever you want. Just understanding the basics will be enough. Introduction to GitHub Actions and Webhooks Let me start by explaining what are GitHub Actions and GitHub Webhooks. Github Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. You can create workflows that build and test every pull request to your repository, or deploy merged pull requests to production. Webhooks provide a way for notifications to be delivered to an external web server whenever certain actions occur on a repository or organization. … For example, you can configure a webhook to execute whenever: A repository is pushed to A pull request is opened A GitHub Pages site is built A new member is added to a team Defining the problem and solution As I said, my example will be automating the deployment of this site. Here is the normal workflow of me doing it manually: As you can see, the only place where my work is really required is writing the post. Other two steps can be automated. We will use GitHub Actions to generate the site content and Webhooks to let our server know about the new content so it can pull the changes. Lets get started. Setting up GitHub Actions Setting up a GitHub Action is as easy as creating a .yml file in .github/workflows/ directory in your repository. Let us create a new action to build our site. Fortunately, there is already a GitHub action to do it for us. Create a file called .github/workflows/jekyll.yml in your root directory of your repository and put the following contents: name: Jekyll site CI on: push: branches: [ main ] pull_request: branches: [ main ] jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - name: Jekyll Actions uses: helaili/jekyll-action@2.2.0 with: token: ${{ secrets.GITHUB_TOKEN }} keep_history: true target_branch: 'gh-pages' Thats it! We have created our first Action. When we push this change, GitHub will start building our site and push the result to gh-pages branch. Currently, it will take a while to build because we dont use caching. So lets include it to build faster. Add the following piece as a second step: # Use GitHub Actions' cache to shorten build times and decrease load on servers - uses: actions/cache@v2 with: path: vendor/bundle key: ${{ runner.os }}-gems-${{ hashFiles('**/Gemfile') }} restore-keys: | ${{ runner.os }}-gems- We are done with the Actions part. You can see the final code here. When you are also done with the code, just push it to trigger the action. Setting up the Webhook and related endpoint Now that we set up our Action to build the site, we need to let our server know about the changes so that it can pull the changes. Creating a Webhook from GitHub To add a Webhook, open your repository in browser and navigate to Settings > Webhooks and click Add Webhook. Fill in the form with appropriate values. Here is an example: This is all you have to do from GitHub. Now, whenever there is a push event to your repository, GitHub will send a POST request to your payload url with the details. Note: Our Action is configured to push to a branch in our repository, so it will also trigger this hook and we will catch it. Creating an endpoint to handle the requests I will use Flask framework to handle the post requests coming to our endpoint. You can use whatever programming language or framework you want. It will be very simple code with just one job: Validate the secret keys and run a specific code. Lets start by creating a new project and a virtual environment: mdkir post_receiver cd post_receiver python3 -m venv venv source venv/bin/activate Install the required packages: pip install Flask gunicorn Create a new file for storing our environment variables: # config.py APP_KEY = \"your-secret-key\" # same key that is used in github while creating the webhook PROJECT_PATH = \"/path/to/your/project/\" # you will want to cd into this path and perform commands such as git pull etc. And create the Flask application: # post_receiver.py import hashlib import hmac import subprocess from flask import Flask, request import config application = Flask(__name__) @application.route('/', methods=['GET', 'POST']) def index(): if request.method == 'GET': return 'OK' elif request.method == 'POST': content = request.data secret = bytes(config.APP_KEY, 'utf-8') digester = hmac.new(secret, content, hashlib.sha256) calculated_signature = 'sha256=' + digester.hexdigest() actual_signature = request.headers.get('X-Hub-Signature-256') if calculated_signature == actual_signature: subprocess.Popen( ['./perform-git-pull.sh', config.PROJECT_PATH]) return 'OK' else: return 'Error' if __name__ == \"__main__\": application.run(host='0.0.0.0') I will not go into details explaining what each line does. Basically, we are checking if the request is a POST request and if so we are comparing the secret keys to make sure that the request is coming from GitHub. In our case, this is not too important because when the keys match we are running simple git commands in our repository but you might need it if you are doing something more complicated. And here is the contents of perform-git-pull.sh file: #!/bin/bash cd $1 git checkout gh-pages git pull We are almost done! All we need to do is create a service to automatically run our code and let nginx handle our endpoint correctly. Create a new file post_receiver.service in /etc/systemd/system/: #/etc/systemd/system/post_receiver.service # change <user> to your actual username [Unit] Description=post_receiver After=network.target multi-user.target [Service] User=<user> Environment=\"PYTHONPATH=/home/<user>/post_receiver/venv/bin/python\" WorkingDirectory=/home/<user>/post_receiver ExecStart=/home/<user>/post_receiver/venv/bin/gunicorn -b 127.0.0.1:5000 -w 2 --log-file /home/<user>/post_receiver/post_receiver.log post_receiver [Install] WantedBy=multi-user.target Make sure port 5000 is reachable from outside. sudo ufw allow 5000 sudo ufw enable Finally, edit your nginx configuration, /etc/nginx/sites-available/yoursite location = /postreceive/ { proxy_pass http://localhost:5000/; } Start, restart the services sudo systemctl daemon-reload sudo systemctl start post_receiver sudo systemctl enable post_receiver sudo systemctl restart nginx Thats it! curl https://yourdomain.com/postreceive/ should return \"OK\" and we are ready to accept POST requests from GitHub. Notes for debugging In case anything goes wrong, here are a few tips to debug: Every GitHub Action produces a log that you can examine. Check them to see if anything is odd. In the Webhooks tab, there is a sub-tab called Recent Deliveries. You can take a look at there to see the results of the requests from your hooks. You can always test your code locally with curl: curl -i -X POST -H 'Content-Type: application/json' -d '{\"foo\": \"bar\", \"bar\": \"baz\"}' https://yourdomain.com/postreceive/ Happy hacking! ","categories": [],
"tags": ["github-actions","github-webhooks","ci-cd"],
"url": "/2022/01/04/build-and-deploy-automatically.html",
"teaser": null
},{
"title": "Using ffmpeg for Simple Video Editing",
"excerpt":"Story Today, I have recorded a video for one of my classes and I was required to upload it till midnight. The video was perfect except for a few seconds where I misspelled some words and started again. I had to remove that part from the video before uploading it. Since I was low on time, I thought that I better use a GUI program to do this job. I opened up Kdenlive and jumped into editing my video. It was my first time using it so I spent some time to cut and delete the parts that I want to get rid of. When I was ready, I clicked Render button to render my video. It was waaay too slow than I expected. Since I have nothing to do while waiting for render to finish, I thought I could give ffmpeg a shot. Let the show begin Like Kdenlive, I have never used ffmpeg before. Like every normal Linux user do, I opened up a terminal and typed man ffmpeg to learn how to use it… Just kidding :D I opened a browser and typed “ffmpeg cut video by time”. Not the best search query, but it was good enough to find what I am looking for as the first result. Cutting the videos based on start and end time According to answers on the page I mentioned, I run the following commands to cut my video into two parts: ffmpeg -ss 00:00:00 -to 00:01:55 -i input.mov -c copy part1.mp4 # take from 00:00 to 01:55 ffmpeg -ss 00:02:03 -to 00:05:17 -i input.mov -c copy part2.mp4 # take from 02:03 to 05:17 These two commands run instantly! Kdenlive was still rendering… The progress was 46%. Meh… I said “Duck it, I am gonna use ffmpeg only” and cancelled the rendering. Concatenating the video files Now we have two videos that we want to join. Guess what will be our next search query? “ffmpeg join videos”. And here is the first result: echo file part1.mp4 >> mylist.txt echo file part2.mp4 >> mylist.txt ffmpeg -f concat -i mylist.txt -c copy result.mp4 And we are DONE! How easy was that? Whole process took about 10 minutes including my search on the internet. If I continued waiting for Kdenlive to finish rendering, I would probably be still waiting at that time. I love the power of command line! ","categories": [],
"tags": ["cli","ffmpeg"],
"url": "/2022/01/21/ffmpeg-to-rescue.html",
"teaser": null
},{
"title": "SSH into Machine That Is Behind a Private Network",
"excerpt":"Story I believe there is always a “tech support person” in every home. Everyone knows that when there is a problem with any electronic device, they should ask this person. I am the tech support in our house. Today, I had to fix a problem in our desktop. Since I was not at home, I had to fix the problem remotely. Possible solutions Just tell the non-tech people at home to configure the router to forward ssh traffic to desktop, right? Well, this is not an option for me, not because people are non-tech, but there is no router! The desktop is connected to internet via hotspot from mobile phone. There is no root access in the phone and even if there was, it is a really big pain to forward the packets manually. Trust me. Been there, done that! There are tools like ngrok, localtunnel which exposes your localhost to the internet and gives you a URL to access it but I did not want to use them. I did not want to use ngrok because it is not open source and it might have security issues. They are also charging you. localtunnel seemed perfect. The code of both client and server is open. That is great news! But it did not last long because it is just forwarding http/https traffic :( Solution I was thinking of extending the functionality of localtunnel, but I learned a very simple way. You dont need any external program to overcome this issue. The good old ssh can do that! All you need is another machine (a remote server) that both computers can access via ssh. # local machine (my home computer) ssh -R 7777:localhost:22 remote-user@remote.host This command forwards all the incoming connections to port 7777 of remote machine to port 22 of our current machine. In order for this to work, you need to make sure GatewayPorts is set to yes in the remote server ssh configuration. It also assumes our current machine accepts ssh connections via port 22. Now, go to any machine and connect to the remote server first. When we are connected, we will create another ssh connection to port 7777 to connect our home computer. # another local machine (my laptop) ssh remote-user@remote.host # connected remote ssh -p 7777 homeuser@localhost # we are now connected to home computer The last two command can also combined so that we directly hop into the home computer. ssh -t remote-user@remote.host ssh -p 7777 homeuser@localhost Result As a result, it only took us 2 simple ssh commands to do this. This is just unbelievable! Now, I need to find a way to make non-tech people at home run this command when there is a problem. Too bad Linux cant help me there :D ","categories": [],
"tags": ["ssh","private-network","remote-port-forwarding"],
"url": "/2022/02/26/ssh-into-machine-that-is-behind-private-network.html",
"teaser": null
},{
"title": "Creating a *Useless* User",
"excerpt":"Story In my previous post, I explained how to do port forwarding to access some machine behind private network. I will use this method to fix some issues in our desktop at home or my girlfriends computer. Now, of course I dont want to give them access to my server. But they also need to have a user in my server to be able to perform port forwarding via ssh. So I wanted to create a user with least privileges to make sure nothing goes wrong. The solution I searched the problem in it turned out to be very simple. You just need to add two additional flags to adduser command while creating the user. sudo adduser uselessuser --shell=/bin/false --no-create-home Now, uselessuser cant do anything useful in your server. If they try to login, the connection will be closed immediately. ssh uselessuser@remote.host uselessuser@remote.host\\'s password: Could not chdir to home directory /home/uselessuser: No such file or directory Connection to remote.host closed. But they can still do forward the remote port to their local machine. ssh -Nf -R 7777:localhost:22 uselessuser@remote.host uselessuser@remote.host\\'s password: The -N option is the most important one here. From the documentation: -N Do not execute a remote command. This is useful for just forwarding ports. Refer to the description of SessionType in ssh_config(5) for details. Last words I love learning new things everyday. I knew setting the shell of a user to /bin/false will prevent them from logging in. The reason I wrote this blog post is because 2 things I wanted to share: While looking for a solution to the problem I mentioned, I searched “create a user with no privileges in linux” and this came out. It is really interesting for me that another person wanted to do the same thing for the exact same reasons. They were also trying port forwarding via ssh and they wanted to create a limited user in their server to give friends. So the question was a perfect fit to the problem. The -N flag of the ssh command was also surprising for me. It was like as if someone had encountered these problems before and just took the exact steps required to solve this problem for me. I mean look at the documentation. Crazy! ","categories": [],
"tags": ["linux","permissions","privileges"],
"url": "/2022/02/27/creating-a-useless-user.html",
"teaser": null
},{
"title": "Never Get Trapped in Grub Rescue Again!",
"excerpt":"Anytime I install a new system on my machine, I pray God for nothing bad happens. But it usually happens. When I reboot, I find myself in the “Grub rescue” menu and I dont know how to fix things from that point. I go into the live environment and run some random commands that I found on the internet and hope for the best. What a nice way to shoot myself in the foot! But this time is different. This time, I f*cked up so much that even the random commands on the internet could not help me. I was on my own and I needed to figure out what is wrong with my system. Let me tell you what I did: I decided to install another OS just to try it in a real machine. I wanted to shrink one of my partitions to create a space for the new system. I run fdisk /dev/sdb, the very first message that it tells me was This disk is currently in use - repartitioning is probably a bad idea. It's recommended to umount all file systems, and swapoff all swap partitions on this disk. Yes, it just screams “Do not do it!” but come on. I will not try to shrink the partition I am using (sdb3). So it should not be a problem. I ignored the message and shrink it anyway. No problem. Installed and tested the new OS a little bit. Time to reboot and hope for the best. And of course it did not boot. What would I even expecting? As always, I booted into a live environment and run boot-repair command. It was always working but this time… Even after finishing the operation successfully I could not boot into neither Arch nor Ubuntu (the two systems I had previously). Arch was originally mounted in sdb3 and Ubuntu was in sda2. Considering the fact that I only messed with sdb, I should be able to boot Ubuntu, right? Well, yeah. Technically I did boot into Ubuntu but I didnt see the login screen. It was dropping me into something called “Emergency mode” which just makes me panic! sudo update-grub… Nope. Nothing changes. Arch does not boot and Ubuntu partially boots. Let me tell you what the problem was and how my ignorance made it worse: While installing the new system, I saw a partition labelled “Microsoft Basic Data”. I deleted it thinking it is not required because I dont use W*ndows. It turns out, it was my boot partition for Arch, just labelled incorrectly… Big lolz :D But we will see this is not even important because I had to rewrite my boot partition anyway. My Arch was installed in sdb3. When I created a new partition and installed the new system, sdb3 was shifted to sdb5 even though I did not ask for it. But the grub configuration to boot my system was still pointing to sdb3. That was the reason why Arch does not boot. It was trying to boot from sdb3. So I had to recreate grub configuration and reinstall grub to fix it. I run the following commands that I found here in a live Arch environment: mkdir /mnt/arch mount -t auto /dev/sdb5 /mnt/arch arch-chroot /mnt/arch mount -t auto /dev/sdb4 /boot/efi os-prober grub-mkconfig > /boot/grub/grub.cfg grub-install --efi-directory=/boot/efi --target=x86_64-efi /dev/sdb exit reboot And it fixed my grub. I can now boot into Arch, hooray! Ubuntu was not still booting properly. I checked the logs with journalctl -xb and saw something related with sdb. Ubuntu was installed in sda2, why sdb should be a problem? Then I remembered something. Back in times when I was using Ubuntu, I was using sdb1 as a secondary storage. So I had a configuration where it automatically mounts sdb1 on startup. Since I messed with sdb1 , it was failing to mount it. I opened /etc/fstab, and deleted the related line. Bingo! It started booting properly. I started feeling like Hackerman, and I said to myself “You know what, Imma fix everything.” I had a very sh*tty grub menu with useless grub entries from old systems that I dont use anymore. The UEFI also had the same problem. It had ridiculous amount of boot entries that most of them are just trash. These are the pictures I took for reference while trying to figure out which boot options are useless. Sorry for the bad quality. I didnt think I would use them in a blog post. While trying to fix the previous problems, Ive spent enough time in the /boot/efi directory that make me understand where these grub entries are coming from. There were a lot of files belong to old systems. I simply deleted them and updated grub. All of the bad entries were gone. I want to draw your attention here: I did not search for how to delete the unused grub entries. I just knew deleting their directories from /boot/efi will do the job. I am doing this sh*t! (Another hackerman moment :D ) In order to delete useless boot options from UEFI menu, I used efibootmgr. I searched for it on the internet, of course! efibootmgr -v # Check which entries you want to delete, say it is 0003. sudo efibootmgr -b 0003 -B # This will delete third boot option. And finally! I know everything about how all these work. Another shady part of Linux is clear for me. Now: ","categories": [],
"tags": ["linux","grub","partition","uefi"],
"url": "/2022/03/03/never-get-trapped-in-grub-rescue-again.html",
"teaser": null
},{
"title": "Confession Time",
"excerpt":"A failure story Last week, I received an email from Lets Encrypt reminding me to renew my certificates. I forgot to renew it and the certificate expired. Now I cant send or receive any emails. If you send me email in the last week and wonder why I didnt respond, this is the reason. Anyway, I thought it will be easy to fix. Just run certbot again and let him do the job, right? NOPE. It is not that easy. It is just giving me errors with some success messages. If I was not so clueless about what the heck I am doing, I could fix the error. But I dont know anything about how SSL works and it is a shame. I dont even know the subject enough to Google it. I feel like I am the only guy in the planet whose certificate is expired. Seriously, how tf I cant find a solution to a such common problem? There was a saying like, “If you cant find something on the internet, there is a high chance that you are being stupid”. It was not exactly like this but I cant find the original quote either. Argghh… If you know the original quote, email me… No, do not email because it does not work. F%ck this thing. F*%k everything. I deserved this. Do not help. If I cant fix this by myself, I should not call myself computer engineer. I am out. Update The problem is fixed. One of my colleagues told me to reboot the server so that it will (possibly) trigger a script to get a new certificate. I did not think it would work because I already try to get a new certificate manually running certbot renew. And yeah, it didnt change anything but gave me courage to try other dead simple solutions. One of them was adding missing MX records for my domain. certbot was telling me that it cant find any A or AAAA records for www.mail. I didnt think this is related with my problem because how would I receive emails before then? Anyway, I added the records and the errors are gone. It was only giving me success messages now. Everything seemed to be fine. But I still could not connect to my mail account. And here is the solution: sudo systemctl restart dovecot. Kill me. I am guessing I had to restart the mail service because certificate has changed and it had to pick up the new one. I bet if I had run this command right after certbot renew I would not face any issues. The error messages caused by missing mx records were not related with this problem but I was confused by them and I thought something wrong with my certificates. Any way, I am happy that it is finally fixed. Did I learn something from this? Not much. But yeah, sometimes all you need is a simple restart :D ","categories": [],
"tags": ["ssl"],
"url": "/2022/04/08/confession-time.html",
"teaser": null
},{
"title": "Rant: Stop whatever you are doing and learn how licenses work",
"excerpt":"Recently, Github announced that they are making Github Copilot available for everyone. Previously, it was in Beta and you could get it through the waiting list. When I saw the news, I thought I can give it a try. But not so surprisingly it was not free. You have 3 ways to get it: Pay the subscription fee and get it. Prove you are a student and get it for free. Be a maintainer of a popular repository and get it for free. I think I should be able to use it for free because I am a student but apparently they are not convinced yet. Anyways, that is a different story. I dont care if they will give me access to Github Copilot or not. It is not a big deal for me. But some people were really angry about how Github Team being vague while defining the criteria as “being a maintainer of a popular open source project”. I think they are right to some extent. If all you need is having a few thousands stars for a project, you could easily get that. I know a lot of troll or low effort repositories that get a lot of stars because they are funny. Later, I found another tweet that explains how Github decides what is popular. According to this tweet, if you have a repository that is in top 1000 in one of the most popular 34 languages, you are eligible to get Github Copilot for free. This is better than the previous definition but you can still argue that it is not fair because one can create a package for checking if a number is even or not and get thousands of stars. You can criticize this, I get that. But do not come up with silly arguments to justify yourself. Like how on earth would you think that Github is doing something bad because $10/month is too much for this service? It is business man, you pay if you think it is worth it. Thats it. “I joined beta program and it was free, now they want to charge me if I want to continue using it. They did not tell me that.” Uhhm… What? Are you aware that what you are using is another companys service and they have all rights to do whatever they want with their service? How you guys even can build up arguments like that?! This is crazy! Some people argue that “what Github is doing is wrong because they used open source projects without consent.” Another similar argument is that “what Github is doing is evil because they used projects developed by community and now they are selling it without giving any money to the contributors of these projects.” Do you guys even have an idea what licenses stands for? If you dont want to some random person use your code, just license it that way. And if you licensed it with a GPL compatible or similar license you already gave rights anyone to use or sell your code. That is not Githubs problem. That is your problem not understanding how licenses work. Stop complaining. ","categories": [],
"tags": ["copilot","license","github"],
"url": "/2022/06/22/rant-on-peoples-reaction-to-copilot.html",
"teaser": null
},{
"title": "Recap of 2022",
"excerpt":"Its been a while… It has been so long that I forgot how I was writing my blogs back then. My life didnt change that much. Actually, it is getting worse. The biggest problem of my life is the graduation project. Oh, God it is making me sick! I simply dont have any interest for the subject I am supposed to work on. One part of me saying that “come on, you came this far. You are nearly finished. One last push!” and other part of me saying “oh no, dont do it. You have never done something you dont like in your entire life. F*ck it!”. So I am wasting my time each term with the dilemma I just described. I really dont know what to do. This thing is fed up. Second biggest problem is I live in Turkey. I feel like all my friends somehow get rid of this sh*thole and I am locked here. I use Twitter and Reddit to consume daily news and almost everyday I encounter something that make me say “F*ck me, why I am still here? There is no hope”. Actually, the situation was much worse while I was following pages that shares “street interviews”. At first I started watching them for fun but the stupidity of people was real and harming my mental health. Since that day, I started consuming only news. My experience got better but I feel like it is still affecting me in a bad manner because everyday something bad happens and there is not much I can do to fix. Today, I decided to delete Twitter and Reddit. Ill see how it goes. I am living with my parents for the past 6 months, I break up with my girlfriend, I left the place I was working. Man, this could be the worst year of my life! You know what? I am not gonna give up. “… It aint about how hard you get hit. Its about how hard you get hit and keep moving forward. How much you can take, and keep moving forward…” No, seriously things really will be different for me in 2023 I can feel it. I learn from my mistakes, they are making me even more perfect :D I love myself, I got this. ","categories": [],
"tags": [],
"url": "/2022/12/29/recap-of-2022.html",
"teaser": null
},{
"title": "Hot-Reload Long Running Shell Scripts (feat. trap / kill)",
"excerpt":"trap them and kill them! There is a beautiful command in Linux called trap which traps signals and let you run specific commands when they invoked. There is also good ol kill command which not only kills processes but allows you to specify a signal to send. By combining these two, you can run specific functions from your scripts any time! Basic Example Lets start by creating something very simple and build up from there. Create a script with the following contents: #!/bin/bash echo \"My pid is $$. Send me SIGUSR1!\" func() { echo \"Got SIGUSR1\" } # here we are telling that run 'func' when USR1 signal is # received. You can run anything. Combine commands with ; etc. trap \"func\" USR1 # The while loop is important here otherwise our script will exit # before we manage to get a chance to send a signal. while true ; do echo \"waiting SIGUSR1\" sleep 1 done Now make it executable and run it: chmod +x trap_example ./trap_example My pid is 2811137. Send me SIGUSR1! waiting SIGUSR1 waiting SIGUSR1 waiting SIGUSR1 waiting SIGUSR1 waiting SIGUSR1 Open another terminal and send your signal with kill to the specified pid. kill -s USR1 2811137 You should receive \"Got SIGUSR!\" from the other process. Thats it! Now, imagine you write whatever thing you want to execute in func and then you can simply kill -s ... anytime and as many times you want! Lets move the while loop into the func and add some variables so you can see how powerful this is. #!/bin/bash echo \"My pid is $$. Send me SIGUSR1!\" func() { i=1 while true ; do echo \"i: $i\" i=$(( i + 1 )) sleep 1 done } trap \"echo 'Got SIGUSR1!'; func\" USR1 # we need to call the function once, otherwise script # will exit before we manage to send a signal func Now run the script and send SIGUSR1. Here is the result: ./trap_example My pid is 2880704. Send me SIGUSR1! i: 1 i: 2 i: 3 i: 4 i: 5 i: 6 i: 7 Got SIGUSR1! i: 1 i: 2 i: 3 i: 4 i: 5 Got SIGUSR1! i: 1 i: 2 ^C Isnt this neat? More useful example Lets imagine you have multiple long running (infinite loops basically) scripts and you want to restart them without manually searching for their pids and killing them. trap is for the rescue, again! * This command is awesome. Without further ado, lets get started. Create a script called script1 with the following contents: #!/bin/bash # file: script1 i=1 while true ; do echo \"Hello from $0. i is $i\" i=$(( i + 1 )) sleep 1 done And symlink it to another name just for fun: chmod +x script1 ln -s script1 script2 Now we can pretend they are two different scripts as their outputs differ: ./script1 Hello from ./script1. i is 1 Hello from ./script1. i is 2 Hello from ./script1. i is 3 Hello from ./script1. i is 4 ^C ./script2 Hello from ./script2. i is 1 Hello from ./script2. i is 2 Hello from ./script2. i is 3 ^C Finally, create the main script which will start child scripts and restart them on our signals: #!/bin/bash echo \"My pid is $$. You know what to do ( ͡° ͜ʖ ͡°)\" echo \"You can also kill me with 'kill -s INT -\\`pgrep -f `basename $0`\\`'\" pids=() # we will store the pid's of child scripts here scripts_to_be_executed=(\"./script1\" \"./script2\") kill_childs(){ # wow, this sounded wild for pid in \"${pids[@]}\" do echo killing \"$pid\" # -P: kill all the processes whose parent process is 'pid' # see how we are creating processes below pkill -P \"$pid\" done pids=() } # kill childs and restart all the scripts restart_scripts(){ kill_childs # for each script in the list for script in \"${scripts_to_be_executed[@]}\" do # Run the script and store its pid. # note the '&' at the end of command. Without it the script will # block until its execution is finished. Also we are putting it # into braces because we want to create a \"process group\" so that # we can kill all its children later by specifying parent pid # (useful if you have pipes (|) or other &'s in your script!) ($script) & pids+=(\"$!\") done } # we will restart_scripts with SIGUSR1 signal trap 'echo \"restarting scripts\"; restart_scripts' USR1 # we will kill all the childs and exit the main script with SIGINT # which is same signal as when you press <Control-C> on your terminal trap 'echo exiting; kill_childs; exit' INT # run the function once restart_scripts # infinite loop, otherwise main script will exit before we send signal. # remember, we started child processes with '&' so they won't block this script while true; do sleep 1 done Now, you can run your main script and reload your child scripts any time with killall main_script -USR1 Here is an example run: ./trap_multiple My pid is 3124123. You know what to do ( ͡° ͜ʖ ͡°) You can also kill me with 'kill -s INT -`pgrep -f trap_multiple`' Hello from ./script1. i is 1 Hello from ./script2. i is 1 Hello from ./script2. i is 2 Hello from ./script1. i is 2 Hello from ./script2. i is 3 Hello from ./script1. i is 3 restarting scripts killing 3124125 killing 3124126 Hello from ./script1. i is 1 Hello from ./script2. i is 1 Hello from ./script2. i is 2 Hello from ./script1. i is 2 Hello from ./script2. i is 3 Hello from ./script1. i is 3 Hello from ./script2. i is 4 Hello from ./script1. i is 4 restarting scripts killing 3124304 killing 3124305 Hello from ./script1. i is 1 Hello from ./script2. i is 1 Hello from ./script1. i is 2 Hello from ./script2. i is 2 ^Cexiting killing 3124875 killing 3124876 Final words I think I am started to getting obsessed with trap command because it has such a good name and purpose. FOSS people are really on another level when it comes to naming. Here is another good one: - How can you see the contents of a file? + You cat it. - What if you want to see them in reverse order? + You tac it. No, it is not just a joke. Try it… Man I love Gnoo slash Linux. Anyway, I hope now you know how to trap and kill. Next week I will explain how to unzip; strip; touch; finger; grep; mount; fsck; more; yes; fsck; fsck; umount; clean; sleep ( ͡° ͜ʖ ͡°). * ","categories": [],
"tags": ["trap","kill","linux"],
"url": "/2023/01/15/hot-reloading-with-trap-and-kill.html",
"teaser": null
}]