For my public Git repositories, I still mostly use third-party services like Bitbucket, Gitlab, Notabug, and Github. The advantage, of course, is that these services are well-known, and they keep traffic away from a private server or something like SDF.
On the other hand, serving your code through these services gives them data about your visitors, and it also makes your publication dependent on them. At least as a backup, it would be nice to serve your code through an independent facility, but you may not want to run a Git server (Gogs, Gitlab, etc). In principle, you can serve the repositories directly from a plain webserver, but you need to set up the folders correctly; just making a clone world-readable will not suffice.
In short, you have to run git update-server-info
in each repository,
as I recently learned from a nice little post by "Solene" with the title
How to publish a git repository on http.
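For a single repository, the manual steps look roughly like this (the path is only an example; the post-update sample hook ships with git and does nothing but call update-server-info):

cd ~/html/gits/plog      # example: a world-readable working copy
git update-server-info   # writes .git/info/refs and .git/objects/info/packs
# optionally let git refresh these files itself after every push:
mv .git/hooks/post-update.sample .git/hooks/post-update
chmod +x .git/hooks/post-update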
I'm using the following script to update the repositories on my gits webpage, but I still need to pull together all the content from the third-party sites; only the most important repos are already included.
( poorkyll.sh
is a script contained in my "plog" repo: it basically runs
Markdown.pl
on all *.md
files with some HTML header and body stuff,
and also generates my phlog/glog/blog entries, in combination with lynx. )
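As a very rough sketch (not the actual poorkyll.sh, and leaving out the phlog/glog part with lynx), it does something along these lines, assuming its argument is the stylesheet to pull in:

#!/bin/sh
# rough sketch of poorkyll.sh for illustration only
css=$1
for f in *.md
do out=${f%.md}.html
   {
     echo "<html><head><link rel=stylesheet href=\"$css\"></head><body>"
     Markdown.pl "$f"
     echo "</body></html>"
   } > "$out"
done

The actual update script for the gits page: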
#!/bin/sh
# regenerate the index and the per-repository pages on the gits webpage
index=index.md
wdir=yargo.sdf.org/gits
cd "$HOME/html/gits" || exit 1
# start the index with the top-level README
cat README.md > $index
cat << EOH >> $index
---
## repositories:
EOH
# every subdirectory is expected to be a git working copy
for nn in *
do if test -d "$nn"
   then cd "$nn"
      # make the repo cloneable through a plain webserver ("dumb" HTTP)
      git update-server-info
      # render the repo's *.md files to HTML
      poorkyll.sh ../y.css
      cd -
      # link the repo page and give a ready-made clone command
      echo "- [$nn]( $nn/README.html ) " >> $index
      echo " \`git clone http://$wdir/$nn/.git $nn\`" >> $index
   fi
done
cat << EOH >> $index
---
_(generated `date -u`)_
EOH
# finally render the index page itself
poorkyll.sh y.css
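The generated index then links each repository page and shows a ready-made clone command; for the plog repo, for example, the line comes out as

git clone http://yargo.sdf.org/gits/plog/.git plog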
.:.