« I note that after much hue and cry, and many arguments, I still do not know what color this bikeshed will be.
I feel I have been informed of the many examples of problems with colors, cultural relevance of specific hues, details of paint techniques, anecdotes of past experiences with varying colors, larger socio-economic issues reflected through color choices, philosophy of colors, philosophy *about* the philosophy of color, legal and moral issues confronted during color evaluation, the impact of other bikeshed color choices, and how specific colors (and patterns) are under-represented, the finer details of paint application personnel selection, and how certain colors are representative of larger social issues being played out in microcosms in individual environments…
….but I still do not know what color this bikeshed will be.
Please advise. »
While developing a plugin for WordPress I was having trouble linting CSS files in PHPStorm. One file in particular was giving hundreds of false positives for errors related to paths:
This bothered me. I wanted to fix the reported errors but the reporting was wrong. Most of the time there was nothing to fix. After much fiddling I discovered files under a folder marked as Resource Root can be referenced as relative:
Lo and behold, the errors became real! Oh crud, time to fix.
In your .travis.yml file, add:
```yaml
before_install:
  - composer require phpunit/phpunit:4.8.* satooshi/php-coveralls:dev-master
  - composer install --dev
script:
  - ./vendor/bin/phpunit --coverage-clover ./tests/logs/clover.xml
after_script:
  - php vendor/bin/coveralls -v
```
- `before_install`: Calls Composer and installs PHPUnit 4.8.* + satooshi/php-coveralls.
- `script`: Calls the installed version of PHPUnit and generates a clover.xml file in ./tests/logs/clover.xml. (This XML file will be used by PHP-Coveralls.)
- `after_script`: Launches satooshi/php-coveralls in verbose mode.
Create a .coveralls.yml file that looks like:
```yaml
coverage_clover: tests/logs/clover.xml
json_path: tests/logs/coveralls-upload.json
service_name: travis-ci
```
- `coverage_clover`: The path to the PHPUnit-generated clover.xml file.
- `json_path`: Where to output a JSON file that will be uploaded to the Coveralls API.
- `service_name`: Use either `travis-ci` or `travis-pro`.
Add badges to your GitHub README.md file.
```markdown
[![Build Status](https://travis-ci.org/NAMESPACE/REPO.svg?branch=master)](https://travis-ci.org/NAMESPACE/REPO)
[![Coverage Status](https://coveralls.io/repos/NAMESPACE/REPO/badge.svg?branch=master&service=github)](https://coveralls.io/github/NAMESPACE/REPO?branch=master)
```
Change NAMESPACE and REPO to match your GitHub repo.
Let’s start with a joke. This GitHub repository:
“It’s funny ’cause it’s true” -Homer Simpson
Text configuration files (XML, Yaml, JSON, INI, …) work when the configuration is read once, the software persists in memory, and the application doesn’t exit until the user is done.
This is not what PHP does best. Sure PHP also reads the configuration file “once” but the fundamental difference is that PHP starts and exits dozens, maybe hundreds, of times for a single user using a single application.
The metaphorical equivalent would be relaunching World Of Warcraft every time a user clicks on something.
For PHP to be the right tool for the right job, it has to be fast. Fast for developers to develop in *and* also fast for end users. (Hooray for PHP7!)
Some clever devs get around configuration performance problems by adding extra steps such as transpiling text into pure PHP before deploying, but do these complicated solutions really serve the PHP developer and the underlying philosophy of how we write code? When it comes to PHP there is a nuanced difference between “performance” and “fast.”
Let’s talk about JSON.
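A JSON configuration file might look like this (a minimal stand-in, not from the original post; the keys mirror the PHP example below):

```json
{
    "this": "is",
    "valid": "json"
}
```

Except that to actually use it, PHP has to read the file and `json_decode()` it on every single request.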
Wow. Talk about language independence. No reprocessing!
The equivalent in PHP:
```php
$php = [
    'this'  => 'is',
    'valid' => 'php',
];
```
Tada! No overhead of having to validate, process, and convert to PHP. Is it uglier? Debatable.
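The common idiom (file name illustrative, not from the original post) is to return the array from its own file:

```php
<?php
// config.php — returns the configuration array directly.
return [
    'this'  => 'is',
    'valid' => 'php',
];
```

Then `$config = require 'config.php';` wherever you need it. With OPcache enabled the compiled file sits in memory, so repeated loads cost next to nothing.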
To be clear: XML, Yaml, JSON, and friends are fine as documents or as data to be processed by PHP. This is totally normal and sometimes even useful. 😉 Beyond that, any reasonable PHP developer must conclude that configuration files cannot be a bottleneck: not a bottleneck for the speed of delivering shippable code, nor a bottleneck for acceptable performance. When choosing anything other than native PHP for configuration you are making a trade-off. Is the trade-off worth it? The answer is always no.¹
But the secretary needs to be able to edit the app config live on the server and PHP is too hard for him!
But caching! But Transpiling!
But I like coding parsers!
Cool! Use your powers for docs and data, not PHP configs.
¹ Unless you are storing your PHP configs in Apache or Nginx as ENV variables. Then to you, madam or sir, I bow down.
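As a sketch of that exception (the variable name APP_ENV and its value are illustrative): Apache can inject configuration as environment variables via mod_env.

```apache
# Apache vhost config: expose a value to PHP as an environment variable.
SetEnv APP_ENV "production"
```

PHP then reads it back with `getenv('APP_ENV')` (or `$_SERVER['APP_ENV']`, depending on your SAPI); Nginx with PHP-FPM achieves the same with `fastcgi_param`.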
I thought Blackfire.io would be able to handle POST the way Xdebug does: generate a different cachegrind file for every PHP invocation. Alas, Blackfire.io currently only profiles static web pages, command-line scripts, or API calls. In theory I could have used “Copy-As-cURL in your Browser” (and believe me I tried), but in practice the WordPress admin is stateless (no $_SESSION) and uses check_admin_referer() everywhere, which made whatever POST action I copied as cURL useless.
My solution was the following hack:
```shell
cd /path/to/wordpress
blackfire run wp eval-file --url=http://pressbooks.dev/helloworld/ test.php
```
Where `wp` is WP-CLI, `eval-file` loads and executes a PHP file after loading WordPress, `--url=` is the current site I want to profile on a WordPress multi-site install, and `test.php` is a script containing only the functionality I want to profile.
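For reference, test.php can be a one-liner; `my_slow_feature()` below is a hypothetical stand-in for whatever code you actually want to profile:

```php
<?php
// test.php — executed by `wp eval-file` after WordPress has bootstrapped,
// so all WordPress APIs are available. Keep the script to just the code
// under test so the profile stays focused.
my_slow_feature(); // hypothetical; replace with the code you want to profile
```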
The profiler data gave some bogus results (i.e., a lot of WP-CLI bootstrapping gets flagged as slow) but at least this was better than nothing.
In the future, it would be great if Blackfire Companion had some sort of option to profile “the next action,” or to “start profiler on submit,” or something other than reloading the current page… Ping SensioLabs?
Getting cwRsync to work with Vagrant on Windows 10 is a pain.
This tutorial is for people who have:
- Installed Vagrant (currently 1.7.4)
- Installed cwRsync Free Edition (currently 5.4.1)
- Installed Git for Windows (currently 2.5.3)
Reading comprehension 101:
cwRsync is a standalone version of rsync for Windows that doesn’t require Cygwin to be installed. I don’t have Cygwin installed because Git For Windows includes Git Bash and this is “good enough.” With a regular standalone cwRsync installation Cygwin will never be in the PATH and Vagrant will never add the required /cygdrive prefix.
Add `C:\Program Files (x86)\cwRsync` (or wherever you installed) to your path. To avoid problems make sure this string is placed before `C:\Program Files\Git\cmd` and/or `C:\Program Files\Git\mingw64\bin;C:\Program Files\Git\usr\bin`
Add the following system variable: `CYGWIN = nodosfilewarning`
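If you prefer the command line, the system variable can also be set once from an elevated Command Prompt (open a new shell afterwards for it to take effect); the PATH itself is safer to edit through the System Properties dialog, since `setx` flattens it:

```
setx CYGWIN nodosfilewarning
```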
Change line ~43 from:

```ruby
hostpath = Vagrant::Util::Platform.cygwin_path(hostpath)
```

to:

```ruby
hostpath = "/cygdrive" + Vagrant::Util::Platform.cygwin_path(hostpath)
```
Restart your shells to apply changes. Fiddle with your Vagrantfile. Tada!
Git for Windows is based on MinGW. cwRsync is based on Cygwin. You cannot run Vagrant & cwRsync from Git Bash because cwRsync includes its own incompatible SSH binary. If you try you will get the following error:
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [Receive r=3.1.0]
Instead, when launching Vagrant use Microsoft PowerShell.
While working on Pressbooks, a multi-site WordPress-based web application, I noticed that some of our customers were getting blank pages in the admin section. Specifically, customers with a lot of Sites (or Books, as they are known in Pressbooks).
Checking the error logs I saw that these customers were running out of memory.
PHP Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 292913 bytes) in /path/to/object-cache.php on line 212.
First, to temporarily stop the out-of-memory problem so I could profile, I added the following to wp-config.php:
```php
define( 'WP_MAX_MEMORY_LIMIT', '512M' );
```
Next, using Blackfire.IO I was able to determine the following:
That is, when a customer was looking at their dashboard, PHP was consuming 285MB of memory, most of it in the Memcached Object Cache plugin.
That’s weird. I’m using the latest version of the plugin, the plugin is developed by core developers, and no one has reported this before? Or so I thought! Browsing the plugin SVN I see the following change committed to trunk:
There are a few more fixes in there as well. After installing the TRUNK version of this plugin, Blackfire.IO displayed:
That’s a 273MB improvement!
It took me days to figure out this problem. It would have saved me a lot of time had I seen the new code first.
- The code in TRUNK has at least 2 bugs. (…just load the file in PHPStorm and the errors will be underlined in red)
- Redis Object Cache gives better results.
For now, this is good enough.
Two weeks ago I drank the Kool-Aid and switched my laptop to Windows 10.
I haven’t used Windows on my personal computer for twelve years; I ran Ubuntu and OS X instead. This is a big deal for me.
As a LAMP developer the switch has been painful. Here are my top 3 pain points:
PuTTY, an app released in 1998, is still the best option for SSH on Windows. Actually, KiTTY is, but you still need PuTTY tools like Pageant or the Key Generator to do anything useful. I spend too much time painstakingly converting perfectly good SSH keys into strange PPK files. I squint and click through a tree of options to do the most basic of tasks, like logging in without a password.
A better SSH for Windows might be GIT-SCM. When you install this you get Git Bash which has SSH. To be honest the Git Bash terminal is open 100% of the time I am sitting at my desktop. An unfortunate island of isolation that my other Windows tools are constantly fighting against…
SSH toolkits are BSD-licensed. The fact that Microsoft hasn’t included SSH in PowerShell by now is simply unacceptable. If Microsoft seriously wants web developers checking out Windows 10 then this is the biggest roadblock or, more to the point, this is the road that will lead me back to Linux when I can’t take it anymore.
As a developer my monitor resolution is 1920 x 1080 (or higher!). In Windows 10, no matter where I start, I’m pretty much guaranteed that three clicks in I time-travel back to Windows NT. Tiny, ugly, anti-responsive dialogues that require toothpick-like clicking to change everyday web developer configurations. Come on Microsoft, even Linux isn’t this ugly in 2015!
Hyper-V support in Vagrant! This is actually the main reason I switched. Hyper-V is Microsoft’s competitor to VirtualBox. Conclusion? Don’t believe the hype.
I spent days trying to get Vagrant to provision a LAMP stack using Hyper-V. I even spent $159.82 CAD to upgrade from Windows 10 Home to Pro so that I could activate this feature.
Here is a list of URLs for anyone who dares try this themselves. Maybe you’ll have better luck than me?
- How to upgrade from Windows 10 Home to Windows 10 Pro
- How to activate Hyper-V
- What is a Virtual Switch in Hyper-V?
- Running Ubuntu with DHCP on Hyper-V over WIFI
Hello World 10
I’m still a LAMP developer at heart, with all the Stockholm Syndrome that comes from making a living with PHP, but Microsoft is changing.
Most notably C# is now open source. In 2014 I worked a job where I coded C# and, well, I liked it. For desktop, for tablet, for command line, I actually think the .NET ecosystem is pretty great. For web? For backend? Absolutely not. That said, all things considered, I decided I could no longer simply put my fingers in my ears singing “Na na na na I can’t hear you!”
Yes, I understand the distinction between Libre and Open, and at this point in my life I am willing to make the trade-off. I think Microsoft is setting up for the next decade, and by switching my laptop I am making a bet.
For this to pay off, Microsoft needs to accept that .NET is not the dominant web development platform and attract those developers anyway. If they make life easier for the eclectic ecosystem that is the *NIX backend, then mobile and desktop will follow.
But most of all, SSH out-of-the-box for fux sake!